Planet Debian - https://planet.debian.org/

Steve Kemp: Building a computer - part 1

Thursday 11th of July 2019 10:01:00 AM

I've been tinkering with hardware for a couple of years now; most of this is trivial stuff, if I'm honest. For example:

  • Wiring a display to a WiFi-enabled ESP8266 device.
    • Making it fetch data over the internet and display it.
  • Hooking up a temperature/humidity sensor to a device.
    • Submitting readings to an MQ bus.

Off-hand, I think the most complex projects I've built have been complex in terms of software. For example I recently hooked up a 933MHz radio-receiver to an ESP8266 device, then had to reverse engineer the protocol of the device I wanted to listen for. I recorded a radio-burst using an SDR dongle on my laptop, broke the transmission into 1s and 0s manually, worked out the payload, and then ported that code to the ESP8266 device.

Anyway, I've decided I should do something more complex: I should build "a computer". Going old-school, I'm going to stick to what I know best: the Z80 microprocessor. I started programming as a child with a ZX Spectrum, which is built around a Z80.

Initially I started with BASIC; later I moved on to assembly language, mostly because I wanted to hack games for infinite lives. I suspect the reason I don't play video-games so often these days is that I'm just not very good without cheating ;)

Anyway, the Z80 is a reasonably simple processor, available in a 40-pin DIP package. There are the obvious pins for power, ground, and a clock-source to make the thing tick. After that there are pins for the address-bus, and pins for the data-bus. Wiring up a standalone Z80 seems to be pretty trivial.

Of course making the processor "go" doesn't really give you much. You can wire it up, turn on the power, and barring explosions what do you have? A processor executing NOP instructions with no way to prove it is alive.

So to make a computer I need to interface with it. There are two obvious things that are required:

  • The ability to get your code on the thing.
    • i.e. It needs to read from memory.
  • The ability to read/write externally.
    • i.e. Light an LED, or scan for keyboard input.

I'm going to keep things basic at the moment, no pun intended. Because I have no RAM, because I have no ROM, and because I have no keyboard, I'm going to ... fake it.

The Z80 has 40 pins, of which I reckon we need to cable up more than half. Only the Arduino Mega has enough pins for that, so I think I can wire a Mega to the Z80 and then use the Arduino to drive it:

  • That means the Arduino will generate a clock-signal to make the Z80 tick.
  • The Arduino will monitor the address-bus:
    • When the Z80 makes a request to read the RAM at address 0x0000, the Arduino will return something from its own memory.
    • When the Z80 makes a request to write to the RAM at address 0xffff, the Arduino will store the value away in its own memory.
  • Similarly I can monitor for requests for I/O and fake that.

In short, the Arduino will run a sketch with a 1024-byte array, which the Z80 will believe is its memory. Via the serial console I can read/write to that RAM, or have the contents hardcoded.
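
To make that concrete, here's a rough sketch of the kind of Arduino code I have in mind. This is only an illustration: the pin numbers are placeholders I've picked out of thin air, and the real code will have to respect the Z80's actual timing and handle the /IORQ line for I/O requests too.

// Illustrative only: the Arduino Mega pretends to be 1K of RAM for the Z80.
// Pin numbers are placeholders, not a real wiring plan.

const int CLK_PIN  = 2;   // drives the Z80 clock input
const int MREQ_PIN = 3;   // Z80 /MREQ (active low)
const int RD_PIN   = 4;   // Z80 /RD   (active low)
const int WR_PIN   = 5;   // Z80 /WR   (active low)
const int ADDR_PINS[16] = {22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37}; // A0..A15
const int DATA_PINS[8]  = {38,39,40,41,42,43,44,45};                         // D0..D7

byte ram[1024];           // the "memory" the Z80 will believe it has

void setup() {
  pinMode(CLK_PIN, OUTPUT);
  pinMode(MREQ_PIN, INPUT);
  pinMode(RD_PIN, INPUT);
  pinMode(WR_PIN, INPUT);
  for (int i = 0; i < 16; i++) pinMode(ADDR_PINS[i], INPUT);

  // Hardcode a trivial program at 0x0000: JP 0x0000 (jump to self).
  ram[0] = 0xC3; ram[1] = 0x00; ram[2] = 0x00;
}

unsigned int readAddressBus() {
  unsigned int addr = 0;
  for (int i = 0; i < 16; i++)
    if (digitalRead(ADDR_PINS[i]) == HIGH) addr |= (1u << i);
  return addr;
}

void loop() {
  // Tick the Z80 once.
  digitalWrite(CLK_PIN, HIGH);
  digitalWrite(CLK_PIN, LOW);

  // Only act when the Z80 is making a memory request.
  if (digitalRead(MREQ_PIN) == LOW) {
    unsigned int addr = readAddressBus() % 1024;   // wrap everything into our 1K

    if (digitalRead(RD_PIN) == LOW) {
      // Memory read: drive the data bus with our byte.
      for (int i = 0; i < 8; i++) {
        pinMode(DATA_PINS[i], OUTPUT);
        digitalWrite(DATA_PINS[i], (ram[addr] >> i) & 1);
      }
    } else if (digitalRead(WR_PIN) == LOW) {
      // Memory write: sample the data bus and store the value.
      byte value = 0;
      for (int i = 0; i < 8; i++) {
        pinMode(DATA_PINS[i], INPUT);
        if (digitalRead(DATA_PINS[i]) == HIGH) value |= (1 << i);
      }
      ram[addr] = value;
    }
  }
}

Serial read/write of that array can be bolted on later; the important part is that, from the Z80's point of view, this looks like real memory.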

I thought I was being creative with this approach, but it seems like it has been done before, numerous times. For example:

  • http://baltazarstudios.com/arduino-zilog-z80/
  • https://forum.arduino.cc/index.php?topic=60739.0
  • https://retrocomputing.stackexchange.com/questions/2070/wiring-a-zilog-z80

Anyway I've ordered a bunch of Z80 chips, and an Arduino Mega (I own only one Arduino; I moved on to ESP8266 devices pretty quickly), so once it all arrives I'll document the process further.

Once it works I'll need to slowly remove the Arduino stuff - I guess I'll start by trying to build an external RAM/ROM interface, or an external I/O circuit. But basically:

  • Hook the Z80 up to the Arduino such that I can run my own code.
  • Then replace the Arduino over time with standalone stuff.

The end result? I guess I have no illusions I can connect a full-sized keyboard to the chip, and drive a TV. But I bet I could wire up four buttons and an LCD panel. That should be enough to program a game of Tetris in Z80 assembly, and declare success. Something like that anyway :)

Expect part two to appear after my order of parts arrives from China.

Sven Hoexter: Frankenstein JVM with flavour - jlink your own JVM with OpenJDK 11

Wednesday 10th of July 2019 02:31:27 PM

While you can find a lot of information regarding the Java "Project Jigsaw", I could not really find a good example of "assembling" your own JVM. So I took a few minutes to figure that out. My use case here is that someone would like to use Instana (a non-free tracing solution) which requires the java.instrument and jdk.attach modules to be available. From an operations perspective we do not want to ship the whole JDK in our production Docker images, so we have to ship a modified JVM. Currently we base our images on the builds provided by AdoptOpenJDK.net, so my examples are based on those builds. You can just download and untar them to any directory to follow along.

You can check the available modules of your JVM by running:

$ jdk-11.0.3+7-jre/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3

As you can see only the java.instrument module is available. So let's assemble a custom JVM which includes all the modules provided by the default AdoptOpenJDK.net JRE builds and the missing jdk.attach module:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.attach --output myjvm
$ ./myjvm/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3
jdk.attach@11.0.3

Size-wise the increase is, as expected, rather minimal:

$ du -hs myjvm jdk-11.0.3+7-jre jdk-11.0.3+7
141M    myjvm
121M    jdk-11.0.3+7-jre
310M    jdk-11.0.3+7

For the fun of it you could also add the compiler so you can execute source files directly:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.compiler --output myjvm2
$ ./myjvm2/bin/java HelloWorld.java
Hello World!

Jonathan Dowland: Bose on-ear wireless headphones

Wednesday 10th of July 2019 09:27:47 AM

Azoychka modelling the headphones

Earlier this year, and after about five years, I had to accept that my beloved AKG K451 foldable headphones had finally died, despite the best efforts of a friendly colleague in the Newcastle Red Hat office, who had replaced and re-soldered all the wiring through the headband, and disassembled the left ear-cup to remove a stray metal ring that had got jammed in the jack, most likely snapped from one of the several headphone wires I'd gone through.

The K451s were really good phones. They didn't sound quite as good as my much larger, much less portable Sennheisers, but the difference was small, and the portability gave them fantastic utility. They remained comfortable to wear and listen to for hours on end, and were surprisingly low-leaking. I became convinced that on-ear was a good form factor for portable headphones.

To replace them, I decided to finally give wireless headphones a try. There are not a lot of on-ear, smaller form-factor wireless headphone models. I really wanted to like the Sony WH-H800s, which (I thought) looked stylish, and reviews for their bigger brother (the 1000 series over-ear) are fantastic. The 800s proved very hard to audition. I could only find one vendor in Newcastle with a pair for evaluation, Currys PC World, but the circumstances were very poor: a noisy store, the headphones tethered to a security frame on a very short leash, so I had to stoop to put them on; no ability to try my own music through the headset. The headset in actuality seemed poorly constructed, with the hard plastic seeming to be ill-fitting such that the headset rattled when I picked it up.

I therefore ended up buying the Bose on-ear wireless headphones. I was able to audition them in several different environments, using my own music, both over Bluetooth and via a cable. They are very comfortable, which is important for the use-case. I was a little nervous about reports on Bose sound quality, which is described as more sculpted than true to the source material, but I was happy with what I could hear in my demonstrations. What clinched it was a few other circumstances (that I won't elaborate on here) which brought the price down to comparable to what I paid for the AKG K451s.

A few months in, the only criticism I have of the Bose headphones is that I can get some mild discomfort on my helix if I have positioned them poorly. This has not turned out to be a big problem. One consequence of having wireless headphones, aside from increased convenience in the same listening circumstances where I used wired headphones, is all the situations in which I can now use them where I wouldn't have bothered before, including a far wider range of housework chores, going up and down ladders, DIY jobs, etc. I'm finding myself consuming a lot more podcasts and programmes from BBC Radio, and experimenting more with streaming music.

Matthew Garrett: Bug bounties and NDAs are an option, not the standard

Tuesday 9th of July 2019 09:15:21 PM
Zoom had a vulnerability that allowed users on MacOS to be connected to a video conference with their webcam active simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

(a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

If your bug bounty requires people sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before you're committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.


Sean Whitton: Upload to Debian with just 'git tag' and 'git push'

Tuesday 9th of July 2019 08:49:41 PM

At a sprint over the weekend, Ian Jackson and I designed and implemented a system to make it possible for Debian Developers to upload new versions of packages by simply pushing a specially formatted git tag to salsa (Debian’s GitLab instance). That’s right: the only thing you will have to do to cause new source and binary packages to flow out to the mirror network is sign and push a git tag.

It works like this:

  1. DD signs and pushes a git tag containing some metadata. The tag is placed on the commit you want to release (which is probably the commit where you ran dch -r).

  2. This triggers a GitLab webhook, which passes the public clone URI of your salsa project and the name of the newly pushed tag to a cloud service called tag2upload.

  3. tag2upload verifies the signature on the tag against the Debian keyring [1], produces a .dsc and .changes, signs these, and uploads the result to ftp-master [2].

    (tag2upload does not have, nor need, push access to anyone’s repos on salsa. It doesn’t make commits to the maintainer’s branch.)

  4. ftp-master and the autobuilder network push out the source and binary packages in the usual way.

The first step of this should be as easy as possible, so we’ve produced a new script, git debpush, which just wraps git tag and git push to sign and push the specially formatted git tag.

We’ve fully implemented tag2upload, though it’s not running in the cloud yet. However, you can try out this workflow today by running tag2upload on your laptop, as if in response to a webhook. We did this ourselves for a few real uploads to sid during the sprint.

  1. First get the tools installed. tag2upload reuses code from dgit and dgit-infrastructure, and lives in bin:dgit-infrastructure. git debpush is in a completely independent binary package which does not make any use of dgit [3].

    % apt-get install git-debpush dgit-infrastructure dgit debian-keyring

    (you need version 9.1 of the first three of these packages, in Debian testing, unstable and buster-backports at the time of writing).

  2. Prepare a source-only upload of some package that you normally push to salsa. When you are ready to upload this, just type git debpush.

    If the package is non-native, you will need to pass a quilt option to inform tag2upload what git branch layout you are using—it has to know this in order to produce a .dsc. See the git-debpush(1) manpage for the supported quilt options.

    The quilt option you select gets stored in the newly created tag, so for your next upload you won’t need it, and git debpush alone will be enough.

    See the git-debpush(1) manpage for more options, but we’ve tried hard to ensure most users won’t need any.

  3. Now you need to simulate salsa’s sending of a webhook to the tag2upload service. This is how you can do that:

    % mkdir -p ~/tmp/t2u
    % cd ~/tmp/t2u
    % DGIT_DRS_EMAIL_NOREPLY=myself@example.org dgit-repos-server \
        debian . /usr/share/keyrings/debian-keyring.gpg,a --tag2upload \
        https://salsa.debian.org/dgit-team/dgit-test-dummy.git debian/1.23

    … substituting your own service admin e-mail address, salsa repo URI and new tag name.

    Check the file ~/tmp/t2u/overall.log to see what happened, and perhaps take a quick look at Debian’s upload queue.

A few other notes about trying this out:

  • tag2upload will delete various files and directories in your working directory, so be sure to invoke it in an empty directory like ~/tmp/t2u.

  • You won’t see any console output, and the command might feel a bit slow. Neither of these will matter when tag2upload is running as a cloud service, of course. If there is an error, you’ll get an e-mail.

  • Running the script like this on your laptop will use your default PGP key to sign the .dsc and .changes. The real cloud service will have its own PGP key.

  • The shell invocation given above is complicated, but once the cloud service is deployed, no human is going to ever need to type it!

    What’s important to note is the two pieces of user input the command takes: your salsa repo URI, and the new tag name. The GitLab web hook will provide the tag2upload service with (only) these two parameters.

For some more discussion of this new workflow, see the git-debpush(1) manpage. We hope you have fun trying it out.

  [1] Unfortunately, DMs can’t try tag2upload out on their laptops, though they will certainly be able to use the final cloud service version of tag2upload.
  [2] Only source-only uploads are supported, but this is by design.
  [3] Do not be fooled by the string ‘dgit’ appearing in the generated tags! We are just reusing a tag metadata convention that dgit also uses.

Thomas Lange: Talks, articles and a podcast in German

Tuesday 9th of July 2019 02:31:36 PM

In April I gave a talk about FAI at the GUUG-Frühjahrsfachgespräch in Karlsruhe (Slides). At this nice meeting Ingo from RadioTux made an interview with me (https://www.radiotux.de/index.php?/archives/8050-RadioTux-Sendung-April-2019.html) (from 0:35:30).

Then I found an article in the iX Special 2019 magazine about automation in the data center which mentioned FAI. Nice. But I was very surprised and happy when I saw a whole article about FAI in Linux Magazin 7/2019. A very good article with some focus on networking, but the class system and installing other distributions are also described. And they will also publish another article about the FAI.me service in a few months. I'm excited!

In a few days I'm going to DebConf19 in Curitiba for two weeks. I will work on Debian web stuff, check my other packages (rinse, dracut, tcsh) and hope to meet a lot of friendly people.

And in August I'm giving a talk at FrOSCon about FAI.


Steve Kemp: Upgraded my first host to buster

Tuesday 9th of July 2019 09:01:00 AM

I upgraded the first of my personal machines to Debian's new stable release, buster, yesterday. So far there have been two minor niggles, but nothing major.

My hosts are controlled, sometimes, by puppet. The puppet-master is running stretch and has puppet 4.8.2 installed. After upgrading my test-host to the new stable I discovered it has puppet 5.5 installed:

root@git ~ # puppet --version
5.5.10

I was not sure if there would be compatibility problems, but after reading the release notes nothing jumped out. Things seemed to work, once I fixed this immediate problem:

# puppet agent --test
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: SSL_connect returned=1 errno=0 state=error: dh key too small
Info: Retrieving pluginfacts
..

This error-message was repeated multiple times:

SSL_connect returned=1 errno=0 state=error: dh key too small

To fix this, comment out the line in /etc/ssl/openssl.cnf which reads:

CipherString = DEFAULT@SECLEVEL=2

The second problem was that I use borg to run backups, once per day on most systems, and twice per day on others. I have an invocation which looks like this:

borg create ${flags} --compression=zlib --stats ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S) \
    --exclude=/proc \
    --exclude=/swap.file \
    --exclude=/sys \
    --exclude=/run \
    --exclude=/dev \
    --exclude=/var/log \
    /

That started to fail:

borg: error: unrecognized arguments: /

I fixed this by re-ordering the arguments so that the command ends with "destination path", and changing --exclude=x to --exclude x:

borg create ${flags} --compression=zlib --stats \
    --exclude /proc \
    --exclude /swap.file \
    --exclude /sys \
    --exclude /run \
    --exclude /dev \
    --exclude /var/log \
    ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S) /

That approach works on my old and new hosts.

I'll leave this single system updated for a few more days to see what else is broken, if anything. Then I'll upgrade them in turn.

Good job!

Daniel Kahn Gillmor: DANE OPENPGPKEY for debian.org

Tuesday 9th of July 2019 04:00:00 AM

I recently announced the publication of Web Key Directory for @debian.org e-mail addresses. This blog post announces another way to fetch OpenPGP certificates for @debian.org e-mail addresses, this time using only the DNS. These two mechanisms are complementary, not in competition. We want to make sure that whatever certificate lookup scheme your OpenPGP client supports, you will be able to find the appropriate certificate.

The additional mechanism we're now supporting (since a few days ago) is DANE OPENPGPKEY, specified in RFC 7929.

How does it work?

DANE OPENPGPKEY works by storing a minimized OpenPGP certificate in the DNS, in a subdomain whose label is based on a hashed version of the local part of the e-mail address.

With modern GnuPG, if you're interested in retrieving the OpenPGP certificate for dkg as served by the DNS, you can do:

gpg --auto-key-locate clear,nodefault,dane --locate-keys dkg@debian.org

If you're interested in how this DNS zone is populated, take a look at the code that handles it. Please request improvements if you see ways that this could be improved.

Unfortunately, GnuPG does not currently do DNSSEC validation on these records, so the cryptographic protections offered by this client are not as strong as those provided by WKD (which at least checks the X.509 certificate for a given domain name against the list of trusted root CAs).

Why offer both DANE OPENPGPKEY and WKD?

I'm hoping that the Debian project can ensure that, whatever sensible mechanism any OpenPGP client implements for certificate lookup, it will be able to find the appropriate OpenPGP certificate for contacting someone within the @debian.org domain.

A clever OpenPGP client might even consider these two mechanisms -- DANE OPENPGPKEY and WKD -- as corroborative mechanisms, since an attacker who happens to compromise one of them may find it more difficult to compromise both simultaneously.

How to update?

If you are a Debian developer and you want your OpenPGP certificate updated in the DNS, please follow the normal procedures for Debian keyring maintenance like you always have. When a new debian-keyring package is released, we will update these DNS records at the same time.

Thanks

Setting this up would not have been possible without help from weasel on the Debian System Administration team, and Noodles from the keyring-maint team providing guidance.

DANE OPENPGPKEY was documented and shepherded through the IETF by Paul Wouters.

Thanks to all of these people for making it possible.

Rodrigo Siqueira: Status Update, June 2019

Tuesday 9th of July 2019 03:00:00 AM

For a long time I have been cultivating the desire to make a habit of writing monthly status updates. Somehow, Drew DeVault's blog posts and Martin Peres's advice nudged me in this direction. So, here I am! I have decided to embrace the challenge of composing a report per month. I hope this new habit helps me to improve my writing and communication skills, but most importantly, helps me keep track of my work. I want to start this update by describing my work conditions and then focus on the technical stuff.

In the last two months, I've been facing an infrastructure problem with my work. I'm dealing with obstacles such as restricted Internet access and long hours in public transportation from my home to my workplace. Unfortunately, I can't work in my house due to the lack of space, and the best place to work is a public library at the University of Brasilia (UnB). Going to UnB every day means I waste around 3 hours per day on a bus. The library has a great environment, but it also has thousands of internet restrictions; the fact that I can't access websites with a '.me' domain or connect to my IRC bouncer is an example of that. In summary: it's been hard to work these days. So let's stop talking about non-technical stuff and get to the heart of the matter.

I really like working on VKMS. I know this is not news to anybody, and in June most of my efforts were dedicated to VKMS. One of my main endeavours was finding and fixing a bug in vkms that makes kms_cursor_crc and kms_pipe_crc_basic fail. I had been chasing this bug for a long time, as can be seen here [1]. After many hours of debugging I sent a patch to handle this issue [2]; however, after Daniel's review, I realized that my patch didn't fix the problem correctly. So Daniel decided to dig into this issue to find the root of the problem and later sent a final fix. If you want to see the solution, take a look at [3]. One day, I want to write a post about this fix since it is an interesting subject to discuss.

Daniel also noticed some concurrency problems in the CRC code and sent a patchset composed of 10 patches that tackle the issue. These patches focused on better framebuffer manipulation and on avoiding race conditions. It took me around 4 days to review and test this series. During my review, I asked many questions related to concurrency and requested other clarifications about DRM. Daniel always replied with a very nice and detailed explanation. If you want to learn a little bit more about locks, I recommend you take a look at [4]. Seriously, it is really nice!

I also worked on adding writeback support to vkms; since XDC2018 I could not stop thinking about the idea of adding a writeback connector to vkms due to the benefits it could bring, such as new tests and helping developers with visual output. As a result, I made some clumsy attempts to implement it in January, but I really dove into this issue in the middle of April, and in June I was focused on making it work. It was tough for me to implement this feature for the following reasons:

  1. There is no i-g-t test for writeback in the main repository; I had to use a WIP patchset made by Brian and Liviu.
  2. I was not familiar with framebuffers, connectors, and their manipulation.

As a result of the above limitations, I had to invest many hours reading the documentation and the DRM/IGT code. In the end, I think that adding writeback connectors paid off well for me, since I feel much more comfortable with many things related to DRM these days. The writeback support has not landed yet; at the moment the patch is under review (v3) and has changed a lot since the first version. For details about this series take a look at [5]. I'll write a post about this feature after it gets merged.

After getting the writeback connectors working in vkms, I felt very grateful to Brian, Liviu, and Daniel for all the assistance they provided to me. In particular, I was thrilled that Brian and Liviu had made the kms_writeback test, which served as an implementation guideline for me. As a result, I updated their patchset to make it work with the latest version of IGT and made some tiny fixes. My goal was to help them upstream kms_writeback, and I submitted the series in the hope of seeing it land in IGT [9].

In parallel with my work on writeback, I was trying to figure out how I could expose vkms configuration to userspace via configfs. After much effort, I submitted the first version of configfs support; in this patchset I exposed the virtual and writeback connectors. Take a look at [6] for more information about this feature; I'll definitely write a post about it after it lands.

Finally, I’m still trying to upstream a patch that makes drm_wait_vblank_ioctl return EOPNOTSUPP instead of EINVAL if the driver does not support vblank get landed. Since this change is in the DRM core and also change the userspace, it is not easy to make this patch get landed. For the details about this patch, you can take a look here [7]. I also implemented some changes in the kms_flip to validate the changes that I made in the function drm_wait_vblank_ioctl and it got landed [8].

July Aims

In June I was totally dedicated to vkms; now I want to slow down a little bit and study userspace more. I want to take a step back and write some tiny programs using libdrm with the goal of understanding the interaction between userspace and kernel space. I also want to look at the theoretical side of computer graphics.

I want to put some effort into improving a tool named kw that helps me during my work on the Linux kernel. I also want to take a look at real overlay plane support in vkms. I have noticed that I have to find a "contribution protocol" (review/write code) that works for me in my current work conditions; otherwise, work will become painful for my relatives and me. Finally, and most importantly, I want to take some days off to enjoy my family.

Info: If you find any problem with this text, please let me know. I will be glad to fix it.

References

[1] “First discussion in the Shayenne’s patch about the CRC problem”. URL: https://lkml.org/lkml/2019/3/10/197

[2] “Patch fix for the CRC issue”. URL: https://patchwork.freedesktop.org/patch/308617/

[3] “Daniel final fix for CRC”. URL: https://patchwork.freedesktop.org/patch/308881/?series=61703&rev=1

[4] “Rework crc worker”. URL: https://patchwork.freedesktop.org/series/61737/

[5] “Introduces writeback support”. URL: https://patchwork.freedesktop.org/series/61738/

[6] “Introduce basic support for configfs”. URL: https://patchwork.freedesktop.org/series/63010/

[7] “Change EINVAL by EOPNOTSUPP when vblank is not supported”. URL: https://patchwork.freedesktop.org/patch/314399/?series=50697&rev=7

[8] “Skip VBlank tests in modules without VBlank”. URL: https://gitlab.freedesktop.org/drm/igt-gpu-tools/commit/2d244aed69165753f3adbbd6468db073dc1acf9a

[9] “Add support for testing writeback connectors”. URL: https://patchwork.freedesktop.org/series/39229/

Jonathan Wiltshire: Too close?

Monday 8th of July 2019 08:39:06 PM

At times of stress I’m prone to topical nightmares, but they are usually fairly mundane – last night, for example, I dreamed that I’d mixed up bullseye and bookworm in one of the announcements of future code names.

But Saturday night was a whole different game. Imagine taking a rucksack out of the cupboard under the stairs, and thinking it a bit too heavy for an empty bag. You open the top and it’s full of small packages tied up with brown paper and string. As you take each one out and set it aside you realise, with mounting horror, that these are all packages missing from buster and which should have been in the release. But it’s too late to do anything about that now; you know the press release went out already because you signed it off yourself, so you can’t do anything else but get all the packages out of the bag and see how many were forgotten. And you dig, and count, and dig, and it’s like Mary Poppins’ carpet bag, and still they keep on coming…

Sometimes I wonder if I get too close to stuff!

Andy Simpkins: Buster Release Party – Cambridge, UK

Monday 8th of July 2019 02:21:17 PM

With the release of Debian GNU/Linux 10.0.0 “Buster” completing in the small hours of yesterday morning (02:00 UTC or thereabouts), most of the ‘release parties’ had already been and gone… Not so for the Cambridge contingent, who had scheduled a get-together for the Sunday [0], knowing that various attendees would have been working on the release until the end.

The Sunday afternoon saw a gathering in the Haymakers pub to celebrate a successful release. We would like to publicly thank the Raspberry Pi Foundation [1] and Mythic Beasts [2], who between them picked up the tab for our bar bill – cheers and thank you!

 

 

 

 

Ian and Sean also managed to join us, taking time out from the dgit sprint that had been running since Friday [3].

For most of the afternoon we had the inside of the pub all to ourselves.   

This was a friendly and low-key event as we decompressed from the previous day’s activities, but we were also joined by many other DDs, users and supporters, mostly hanging out in the garden, though I confess to staying most of the time indoors, in the shade and with a breeze through the pub…

Laptops are still needed even at a party

The start of the next project…

 

[0]    http://www.chiark.greenend.org.uk/pipermail/debian-uk/2019-June/000481.html
       https://wiki.debian.org/ReleasePartyBuster/UK/Cambridge

[1]   https://www.raspberrypi.org/

[2]    https://www.mythic-beasts.com/

[3]    https://wiki.debian.org/Sprints/2019/DgitDebrebase

Matthew Garrett: Creating hardware where no hardware exists

Sunday 7th of July 2019 07:46:01 PM
The laptop industry was still in its infancy back in 1990, but it already faced a core problem that we still face today - power and thermal management are hard, but also critical to a good user experience (and potentially to the lifespan of the hardware). This was in the days when DOS and Windows had no memory protection, so handling these problems at the OS level would have been an invitation for someone to overwrite your management code and potentially kill your laptop. The safe option was pushing all of this out to an external management controller of some sort, but vendors in the 90s were the same as vendors now and would do basically anything to avoid having to drop an extra chip on the board. Thankfully(?), Intel had a solution.

The 386SL was released in October 1990 as a low-powered mobile-optimised version of the 386. Critically, it included a feature that let vendors ensure that their power management code could run without OS interference. A small window of RAM was hidden behind the VGA memory[1] and the CPU configured so that various events would cause the CPU to stop executing the OS and jump to this protected region. It could then do whatever power or thermal management tasks were necessary and return control to the OS, which would be none the wiser. Intel called this System Management Mode, and we've never really recovered.

Step forward to the late 90s. USB is now a thing, but even the operating systems that support USB usually don't in their installers (and plenty of operating systems still didn't have USB drivers). The industry needed a transition path, and System Management Mode was there for them. By configuring the chipset to generate a System Management Interrupt (or SMI) whenever the OS tried to access the PS/2 keyboard controller, the CPU could then trap into some SMM code that knew how to talk to USB, figure out what was going on with the USB keyboard, fake up the results and pass them back to the OS. As far as the OS was concerned, it was talking to a normal keyboard controller - but in reality, the "hardware" it was talking to was entirely implemented in software on the CPU.

Since then we've seen even more stuff get crammed into SMM, which is annoying because in general it's much harder for an OS to do interesting things with hardware if the CPU occasionally stops in order to run invisible code to touch hardware resources you were planning on using, and that's even ignoring the fact that operating systems in general don't really appreciate the entire world stopping and then restarting some time later without any notification. So, overall, SMM is a pain for OS vendors.

Change of topic. When Apple moved to x86 CPUs in the mid 2000s, they faced a problem. Their hardware was basically now just a PC, and that meant people were going to try to run their OS on random PC hardware. For various reasons this was unappealing, and so Apple took advantage of the one significant difference between their platforms and generic PCs. x86 Macs have a component called the System Management Controller that (ironically) seems to do a bunch of the stuff that the 386SL was designed to do on the CPU. It runs the fans, it reports hardware information, it controls the keyboard backlight, it does all kinds of things. So Apple embedded a string in the SMC, and the OS tries to read it on boot. If it fails, so does boot[2]. Qemu has a driver that emulates enough of the SMC that you can provide that string on the command line and boot OS X in qemu, something that's documented further here.

What does this have to do with SMM? It turns out that you can configure x86 chipsets to trap into SMM on arbitrary IO port ranges, and older Macs had SMCs in IO port space[3]. After some fighting with Intel documentation[4] I had Coreboot's SMI handler responding to writes to an arbitrary IO port range. With some more fighting I was able to fake up responses to reads as well. And then I took qemu's SMC emulation driver and merged it into Coreboot's SMM code. Now, accesses to the IO port range that the SMC occupies on real hardware generate SMIs, trap into SMM on the CPU, run the emulation code, handle writes, fake up responses to reads and return control to the OS. From the OS's perspective, this is entirely invisible[5]. We've created hardware where none existed.

The tree where I'm working on this is here, and I'll see if it's possible to clean this up in a reasonable way to get it merged into mainline Coreboot. Note that this only handles the SMC - actually booting OS X involves a lot more, but that's something for another time.

[1] If the OS attempts to access this range, the chipset directs it to the video card instead of to actual RAM.
[2] It's actually more complicated than that - see here for more.
[3] IO port space is a weird x86 feature where there's an entire separate IO bus that isn't part of the memory map and which requires different instructions to access. It's low performance but also extremely simple, so hardware that has no performance requirements is often implemented using it.
[4] Some current Intel hardware has two sets of registers defined for setting up which IO ports should trap into SMM. I can't find anything that documents what the relationship between them is, but if you program the obvious ones nothing happens and if you program the ones that are hidden in the section about LPC decoding ranges things suddenly start working.
[5] Eh technically a sufficiently enthusiastic OS could notice that the time it took for the access to occur didn't match what it should on real hardware, or could look at the CPU's count of the number of SMIs that have occurred and correlate that with accesses, but good enough


Debian GSoC Kotlin project blog: Week 4 & 5 Update

Sunday 7th of July 2019 11:51:23 AM
Finished downgrading the project to be buildable by gradle 4.4.1

I have finished downgrading the project to be buildable using gradle 4.4.1. The project still needed a part of gradle 4.8 that I have successfully patched into the sid gradle. Here is the link to the changes that I have made.

Now we are officially done with making the project build with our gradle, so we can now go ahead and finally start mapping out and packaging the dependencies.

Packaging dependencies for Kotlin-1.3.30

I split this task into two subtasks that can be done independently:

  • Part 1: make the entire project build successfully without :buildSrc:prepare-deps:intellij-sdk:build
    • Part 1.1: package these dependencies
  • Part 2: package the dependencies in :buildSrc:prepare-deps:intellij-sdk:build, i.e. try to recreate whatever is in it.

The task has been split into this exact model because this folder has a variety of jars that the project uses; we'll have to minimize it and package the needed jars from it. The project also uses other plugins and jars besides this one main folder, which we can map and package simultaneously.

I now have successfully mapped out the dependencies needed by part 1; all that remains is to package them. I have copied the dependencies from the original cache (the one created when I build the project using ./gradlew -Pteamcity=true dist) to /usr/share/maven-repo, so some of these dependencies still need to have their own dependencies clearly defined, i.e. which of their dependencies we can omit and which we need. I have marked such dependencies with a *. So here are the dependencies:

  • jengeleman:shadow:4.0.3 --> https://github.com/johnrengelman/shadow (DONE: https://salsa.debian.org/m36-guest/jengelman-shadow)
  • trove4j 1.x --> https://github.com/JetBrains/intellij-deps-trove4j (DONE: https://salsa.debian.org/java-team/libtrove-intellij-java)
  • proguard:6.0.3 in jdk8 (DONE: released as libproguard-java 6.0.3-2)
  • io.javaslang:2.0.6 --> https://github.com/vavr-io/vavr/tree/javaslang-v2.0.6 (DONE: https://salsa.debian.org/m36-guest/javaslang)
  • jline 3.0.3 --> https://github.com/jline/jline3/tree/jline-3.3.1
  • protobuf-2.6.1 in jdk8 (DONE: https://salsa.debian.org/java-team/protobuf-2)
  • *com.jcabi:jcabi-aether:1.0
  • *org.sonatype.aether:aether-api:1.13.1

!!NOTE: I might have missed some; I'll add them to the list once I get them mapped out properly!!

So if any of you kind souls want to help me out, please take on any of these and package them.

!!NOTE: ping me if you want to build Kotlin on your system and are stuck!!

Here is a link to the work I have done so far. You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I'll try to maintain this blog and post the major updates weekly.

Eriberto Mota: Debian: repository changed its ‘Suite’ value from ‘testing’ to ‘stable’

Sunday 7th of July 2019 03:44:52 AM

Debian 10 (Buster) was released two hours ago. \o/

When using ‘apt-get update’, I can see the message:

E: Repository 'http://security.debian.org/debian-security buster/updates InRelease' changed its 'Suite' value from 'testing' to 'stable'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.

 

Solution:

# apt-get --allow-releaseinfo-change update

 

Enjoy!

Bits from Debian: Debian 10 "buster" has been released!

Sunday 7th of July 2019 01:25:00 AM

You've always dreamt of a faithful pet? He is here, and his name is Buster! We're happy to announce the release of Debian 10, codenamed buster.

Want to install it? Choose your favourite installation media and read the installation manual. You can also use an official cloud image directly on your cloud provider, or try Debian prior to installing it using our "live" images.

Already a happy Debian user and you only want to upgrade? You can easily upgrade from your current Debian 9 "stretch" installation; please read the release notes.

Do you want to celebrate the release? We provide some buster artwork that you can share or use as base for your own creations. Follow the conversation about buster in social media via the #ReleasingDebianBuster and #Debian10Buster hashtags or join an in-person or online Release Party!

Andy Simpkins: Debian Buster Release

Sunday 7th of July 2019 12:30:49 AM

I spent all day smoke testing the install images for yesterday’s (this morning’s – just after midnight local, so we still had 11 hours to spare) Debian GNU/Linux 10.0.0 “Buster” release.

This year we had our “Biglyest test matrix ever” [0]: 111 tests were completed by the point the release images were signed and the ftp team pushed the button. Although more tests were reported to have taken place in IRC, we had a total of 9 people completing the tests called for in the wiki test matrix.

We also had a large number of people in IRC during the image test phase of the release – peaking at 117…

Steve kindly hosted a few of us in his front room – using a local mirror [1] of the images, so our VM tests had the images available as an NFS share, which really speeds things up! Between the 4 of us here in Cambridge we were testing on a total of 14 different machines: mainly AMD64, a couple of “only i386 capable” laptops, and 3 entirely different ARM64 machines (Mustang, Synquacer, & MacchiatoBin).

Room heaters on a hot day – photo RandomBird

In the photo are Sledge, Codehelp, Isy and myself…[2]

Special thanks should also be given to Schweer, who spent time testing the Debian Edu images, and to Zdykstra for testing on ppc64el [2].

 

Finally, Sledge posted a link to the bandwidth utilisation on the distribution network servers. Last week it looked like this:

Debian primary distribution network last week

That is a daily peak of 500 MBytes/s in traffic.

After pushing Buster today, the graph looks like this:

Guess when Buster release went live?

1.5 GBytes/second – that is some network load :-)

 

Anyway – time to head home. I have a release party to attend later this afternoon and would like *some* sleep first.

 

 

[0] https://wiki.debian.org/Teams/DebianCD/ReleaseTesting/Buster_r0

[1] The local mirror is another ARM Synquacer ‘server’ – 24-core Cortex-A53 (the A53 series is targeted at mobile devices)

[2] irc nicks have been used to protect the identity of the guilty :-)

François Marier: SIP Encryption on VoIP.ms

Saturday 6th of July 2019 11:00:00 PM

My VoIP provider recently added support for TLS/SRTP-based call encryption. Here's what I did to enable this feature on my Asterisk server.

First of all, I changed the registration line in /etc/asterisk/sip.conf to use the "tls" scheme:

[general]
register => tls://mydid:mypassword@servername.voip.ms

then I enabled incoming TCP connections:

tcpenable=yes

and TLS:

tlsenable=yes
tlscapath=/etc/ssl/certs/

Finally, I changed my provider entry in the same file to:

[voipms]
type=friend
host=servername.voip.ms
secret=mypassword
username=mydid
context=from-voipms
allow=ulaw
allow=g729
insecure=port,invite
transport=tls
encryption=yes

(Note the last two lines.)

The dialplan didn't change and so I still have the following in /etc/asterisk/extensions.conf:

[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _1NXXNXXXXXX,n,Dial(SIP/voipms/${EXTEN})
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _NXXNXXXXXX,n,Dial(SIP/voipms/1${EXTEN})
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _011X.,n,Authenticate(1234) ; require password for international calls
exten => _011X.,n,Dial(SIP/voipms/${EXTEN})
exten => _011X.,n,Hangup(16)

Server certificate

The only thing I still need to fix is to make this error message go away in my logs:

asterisk[8691]: ERROR[8691]: tcptls.c:966 in __ssl_setup: TLS/SSL error loading cert file. <asterisk.pem>

It appears to be related to the fact that I didn't set tlscertfile in /etc/asterisk/sip.conf and that it's using its default value of asterisk.pem, a non-existent file.

Since my Asterisk server is only acting as a TLS client, and not a TLS server, there's probably no harm in not having a certificate. That said, it looks pretty easy to use a Let's Encrypt cert with Asterisk.

Jonathan Wiltshire: Daisy and George Help Debian

Saturday 6th of July 2019 07:33:53 PM

Daisy and George have decided to get stuck into testing images for #ReleasingDebianBuster.

George is driving the keyboard while Daisy takes notes about their test results.

This test looks like a success. Next!

Jonathan Wiltshire: Testing in Teams

Saturday 6th of July 2019 03:46:44 PM

The Debian CD images are subjected to a battery of tests before release, even more so when the release is a major new version. Debian has volunteers all over the world testing images as they come off the production line, but it can be a lonely task.

Getting together in a group and having a bit of competitive fun always makes the day go faster:

Debian CD testers in Cambridge, GB (photo: Jo McIntyre)

And, of course, they’re a valuable introduction to the life-cycle of a Debian release for future Debian Developers.

Niels Thykier: A decline in the use of hints in the release team

Saturday 6th of July 2019 10:16:11 AM

While we were working on the release, I had a look at how many hints we had deployed during buster, like I did for wheezy a few years back. It seemed we were using a lot fewer hints than previously, so I decided to take a detour into “stats-land”.
