Planet KDE
Updated: 12 hours 15 min ago

Qactus v2.0.0 is out!

Monday 20th of April 2020 11:21:06 AM

Here it is! Qactus v2.0.0 has been released

Selecting a Software Component Through Community Analytics

Monday 20th of April 2020 11:00:00 AM

Note: The following blog post was previously published in French on the enioka blog.


Within enioka Haute Couture, we often help with the technical framing of projects. One activity is to select the building blocks which will be used as a base for the system developed for the customer. This technical foundation is essential for the durability and maintainability of the future system. Most developers base these choices on pure technical merits. While this is an important aspect, the long-term viability of a component is not determined solely by the quality of its code and design; the social dynamics of the team or community in charge of the component under consideration are also crucial.

In this article, we look back at work done in the summer of 2019 on behalf of a client. We had to frame the entire project, but here we focus on one of the choices made for the GUI part. In particular, we had to select a React component library. After an examination of their technical merits in terms of structure and API, we still had two serious candidates: ant-design and Semantic UI. We then had to compare them in terms of community dynamics. It was a perfect situation to use ComDaAn to get an opinion on the health of these two communities. We focused on developer activity in the git repositories.

Cleaning Up the Data

In order to start the study, we clone the repositories of the concerned projects into the ~/Repositories/React directory. With ComDaAn we get one data frame per repository representing its complete history. This is done in a few lines of Python.

    import comdaan as cd

    ant_data = cd.parse_repositories("~/Repositories/React/ant-design")
    sem_data = cd.parse_repositories("~/Repositories/React/Semantic-UI-React")

After exploring the data, either by examining the data frames or by using the visualization presented in the next section, we can see that they are very noisy. Two problems arise. First, in these projects the authors’ names are not consistent; this is particularly noticeable in the case of ant-design, where contributors sometimes write their names in ASCII but not always, sometimes use a Westernized alternative name, and sometimes a phonetic approximation… Second, we can see the use of a robot generating commits on Semantic UI, which gives a distorted picture of the activity.

Thus we decided to generate an identifier for each author based on their email address and remove the commits of the robot. To do so, we created a comdaan_ruleset.py file that we placed in ~/Repositories/React. This file will be automatically discovered by ComDaAn and applied to all repositories in ~/Repositories/React.

    # Use the user name part of emails, or
    # if user name is "me" use the domain name without extension
    def email_to_user(s):
        parts = s.split("@")
        if s.startswith("me@"):
            return parts[1].split(".")[0]
        else:
            return parts[0]


    def is_entry_acceptable(entry):
        if "author_email" not in entry:
            return False
        # No need to measure bots activity
        if entry["author_name"] == "deweybot":
            return False
        return True


    def postprocess_entry(entry):
        entry["author_name"] = email_to_user(entry["author_email"])

We can see in the above code how we standardized author names based on their email addresses, and also how the commits generated by deweybot are rejected. By running our first script again, unmodified, the commit history is automatically cleaned up. We can now return to our main topic and analyze project activity.

Assessing the Project Activity

First we need to produce a time-based visualization of each contributor's activity; this is done by adding a few lines to our script.

    # Activity, all time, for both projects
    a = cd.activity(ant_data, "id", "author_name", "date")
    cd.display(a, output="ant-activity.html", title="Ant Design Activity All Time")
    a = cd.activity(sem_data, "id", "author_name", "date")
    cd.display(a, output="semantic-activity.html", title="Semantic UI Activity All Time")

This visualization is lovingly nicknamed “colorful blobs”; we will see why shortly.


ant-design Activity


Semantic UI Activity

On the abscissa axis we have time, and on the ordinate axis contributors are sorted by the date of their first commit. At each time interval (here, the week) a “blob” is added if the contributor was active during the period. The color is modulated according to the level of activity: the more “intense” the color, the more active the contributor was.

This allows us to see several important pieces of information at a glance:

  1. which are the most active contributors, since the lines frequently containing brightly colored blobs stand out from the rest;
  2. the recruiting capacity of the project: as the contributors are sorted by date of entry, the left envelope of our blobs forms a curve; the steeper the slope, the faster the project recruits;
  3. the retention capacity of the project: within the left envelope, the denser the surface is in blobs, the longer contributors linger on the project.

In the case of our comparison this already gives us some insights. Indeed, we see that ant-design has had an almost constant recruitment rate since 2017 (this rate was lower before 2017, as the inflection of the curve indicates). In the same vein, we can see that the recruitment dynamics of Semantic UI have been eroding since about mid-2018. We can also see a better retention rate for ant-design: the density of blobs is higher, and some of the newly recruited contributors quickly become very productive.

These are very good signs in favour of ant-design. The difference is so pronounced that at this stage it would be tempting to wrap this up immediately. However, we advise a more comprehensive assessment to avoid some bad surprises.

Assessing the Community Size

Another interesting visualization is the evolution of the size of the community over time. Once again we need to add a few lines to our main script.

    # Team Size, all time, for both projects
    s = cd.teamsize(ant_data, "id", "author_name", "date")
    cd.display(s, output="ant-teamsize.html", title="Ant Design Team Size All Time")
    s = cd.teamsize(sem_data, "id", "author_name", "date")
    cd.display(s, output="semantic-teamsize.html", title="Semantic UI Size All Time")

We thus produce two curves for each project.


ant-design Team Size


Semantic UI Team Size

The blue curve represents the trend in the number of commits per week, while the orange curve represents the trend in the weekly number of unique project participants. By comparing the two projects we further confirm what we found during the activity analysis. The ant-design project, through its recruitment and retention rates, is seeing its number of participants per week gradually increase. On the other hand, on the Semantic UI side we see a slow erosion of the team and the overall commit rate. These curves are also valuable in order to identify milestones.

This is not very obvious in the case of ant-design, but we can see a stagnation in the number of people working simultaneously over the year 2017. If we look again at the activity analysis for the project, we can indeed see a recruitment of very active people in 2016 and 2018, but relatively few in 2017. We can therefore assume a reconfiguration of the community in 2017, and it would be interesting to see these changes by further exploring the periods 2016 and 2018.

In the case of Semantic UI, it seems more interesting to focus on 2016, a year of strong growth in the team size of Semantic UI, and also on 2019 to get an idea of the new structure once the erosion phase has started.

Evaluating the Contributor Network

In order to carry out the explorations mentioned above, we need a way to gauge the structure of the community. We thus use the contributor network analysis, which seeks to assess the collaborations between contributors. We base this on artifacts that have been touched by the same people over a given period of time, assuming that in order to produce their patches they had to synchronize and communicate. This is of course an approximation, but one which tends to work pretty well in practice.

ant-design

Let’s focus first on ant-design in 2016 and 2018. For this we add two analyses to our main script.

    # Network for ant design in 2016
    ant_data_2016 = ant_data[(ant_data["date"] >= "2016-01-01") & (ant_data["date"] <= "2016-12-31")].copy()
    n = cd.network(ant_data_2016, "author_name", "files")
    cd.display(n, output="ant-network-2016.html", title="Ant Design Contributor Network 2016")

    # Network for ant design in 2018
    ant_data_2018 = ant_data[(ant_data["date"] >= "2018-01-01") & (ant_data["date"] <= "2018-12-31")].copy()
    n = cd.network(ant_data_2018, "author_name", "files")
    cd.display(n, output="ant-network-2018.html", title="Ant Design Contributor Network 2018")

We then get two contributor networks. Each node represents a contributor and is linked to the contributors it collaborated with (under the approximation cited above); the stronger the collaboration, the stronger the bond. Based on these links we can then assess whether a contributor is very central or not. The centrality used in our analysis is the fraction of the contributors to which a given contributor is connected. The more central a contributor is, the more “intense” its color.
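ComDaAn computes all of this for us, but the underlying idea is simple enough to sketch in plain Python. The following is a toy illustration with made-up commit data, not ComDaAn's actual implementation: contributors are linked whenever they touched the same file, and centrality is the fraction of other contributors a given contributor is connected to.

```python
from collections import defaultdict
from itertools import combinations

def contributor_network(commits):
    """Build a weighted collaboration graph: two contributors gain
    one unit of edge weight per file they both touched."""
    authors_by_file = defaultdict(set)
    for author, files in commits:
        for f in files:
            authors_by_file[f].add(author)
    weights = defaultdict(int)
    for file_authors in authors_by_file.values():
        for a, b in combinations(sorted(file_authors), 2):
            weights[(a, b)] += 1
    authors = {a for a, _ in commits}
    return authors, dict(weights)

def centrality(authors, weights):
    """Fraction of the other contributors each contributor is connected to."""
    neighbors = defaultdict(set)
    for a, b in weights:
        neighbors[a].add(b)
        neighbors[b].add(a)
    return {a: len(neighbors[a]) / (len(authors) - 1) for a in authors}

# Toy history: alice and bob both touched core.py, carol worked alone.
commits = [("alice", ["ui.py", "core.py"]),
           ("bob", ["core.py"]),
           ("carol", ["docs.md"])]
authors, weights = contributor_network(commits)
print(weights)                       # {('alice', 'bob'): 1}
print(centrality(authors, weights))  # alice and bob at 0.5, carol at 0.0
```

On real histories, this kind of co-change graph is what gets laid out and colored by centrality in the figures below.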


ant-design Contributor Network in 2016

For the year 2016, we find a network that already seems visually quite dense (indeed, we can see a large number of nodes and many connections between these nodes). Moreover, we can very quickly identify the two most central contributors of the network: “afc163” and “benjytrys”. It is then possible to extrapolate and claim that these two contributors are probably the de facto maintainers of the project. Finally, around these two contributors, we can identify five other very central nodes. We are therefore in the presence of a project with strong leadership and a team of maintainers of a respectable size (around seven people).


ant-design Contributor Network in 2018

As we suspected, over the year 2018 we see a reconfiguration of the community (which probably took place during 2017). First of all, the network is even denser, which is an excellent sign. We had already seen a strong recruitment, but obviously this has not come at the expense of community cohesion. Moreover, one of the two central contributors to the network has changed: “benjytrys” is no longer central and has been replaced by “smith3816”. Looking at the other central contributors, only one remains in common. We can therefore say that the maintenance team was almost entirely renewed during 2017, and yet the overall dynamics of the project have not been impacted.

Semantic UI

Now we can explore Semantic UI over the years 2016 and 2019. Again we add two analyses to our main script.

    # Network for Semantic UI in 2016
    sem_data_2016 = sem_data[(sem_data["date"] >= "2016-01-01") & (sem_data["date"] <= "2016-12-31")].copy()
    n = cd.network(sem_data_2016, "author_name", "files")
    cd.display(n, output="semantic-network-2016.html", title="Semantic UI Contributor Network 2016")

    # Network for Semantic UI in 2019
    sem_data_2019 = sem_data[(sem_data["date"] >= "2019-01-01") & (sem_data["date"] <= "2019-12-31")].copy()
    n = cd.network(sem_data_2019, "author_name", "files")
    cd.display(n, output="semantic-network-2019.html", title="Semantic UI Contributor Network 2019")

Once again, we get two contributor networks.


Semantic UI Contributor Network in 2016

For the year 2016 we see a network less dense than ant-design’s. Apart from that, the situation seems similar: two very central people (“levithomason” and “jeff.carbonella”) followed by about four other central contributors (including a certain “alexander.mcgarret”). We are in the presence of a reasonably sized maintenance team.


Semantic UI Contributor Network in 2019

As expected with the general erosion dynamics, the network of contributors is less dense in 2019. Furthermore, the maintenance team has shrunk to around two or three people. There is no longer really a co-maintainer situation. The probable maintainer now is “alexander.mcgarret”.

Conclusion

We have reached the end of our project explorations. We have completed three analyses (activity, team size, and contributor network) which gave us a great deal of information on the evaluated projects:

  • community recruitment and retention rates;
  • most active contributors;
  • trends in overall activity and team size;
  • the size and density of the network of contributors;
  • the most central contributors likely forming the maintenance teams.

This is interesting information to get an idea of the projects’ history and struggles in terms of community. In the context of a comparison made in order to select a component, the most relevant choice is clearly ant-design. Indeed, the latter shows a rather stable general dynamic of contribution and has a more than honorable maintenance team. In addition, the community has managed to survive an almost complete renewal of its maintenance team while maintaining its overall recruitment and retention rates. We have a community that seems resilient.

Of course, all this analysis is based solely on commits and considering the contributors as individuals. It might be interesting to check the affiliations of the various contributors… information which is unfortunately unavailable here.

Interestingly, one year after our initial analysis, we do not regret our choice at all. The dynamics identified at the time have been confirmed, and ant-design is now thriving on the technical front, with a major new version released last month.

Appendix: Full Script

    #! /usr/bin/env python
    import comdaan as cd

    ant_data = cd.parse_repositories("~/Repositories/React/ant-design")
    sem_data = cd.parse_repositories("~/Repositories/React/Semantic-UI-React")

    # Activity, all time, for both projects
    a = cd.activity(ant_data, "id", "author_name", "date")
    cd.display(a, output="ant-activity.html", title="Ant Design Activity All Time")
    a = cd.activity(sem_data, "id", "author_name", "date")
    cd.display(a, output="semantic-activity.html", title="Semantic UI Activity All Time")

    # Team Size, all time, for both projects
    s = cd.teamsize(ant_data, "id", "author_name", "date")
    cd.display(s, output="ant-teamsize.html", title="Ant Design Team Size All Time")
    s = cd.teamsize(sem_data, "id", "author_name", "date")
    cd.display(s, output="semantic-teamsize.html", title="Semantic UI Size All Time")

    # Network for ant design in 2016
    ant_data_2016 = ant_data[(ant_data["date"] >= "2016-01-01") & (ant_data["date"] <= "2016-12-31")].copy()
    n = cd.network(ant_data_2016, "author_name", "files")
    cd.display(n, output="ant-network-2016.html", title="Ant Design Contributor Network 2016")

    # Network for ant design in 2018
    ant_data_2018 = ant_data[(ant_data["date"] >= "2018-01-01") & (ant_data["date"] <= "2018-12-31")].copy()
    n = cd.network(ant_data_2018, "author_name", "files")
    cd.display(n, output="ant-network-2018.html", title="Ant Design Contributor Network 2018")

    # Network for Semantic UI in 2016
    sem_data_2016 = sem_data[(sem_data["date"] >= "2016-01-01") & (sem_data["date"] <= "2016-12-31")].copy()
    n = cd.network(sem_data_2016, "author_name", "files")
    cd.display(n, output="semantic-network-2016.html", title="Semantic UI Contributor Network 2016")

    # Network for Semantic UI in 2019
    sem_data_2019 = sem_data[(sem_data["date"] >= "2019-01-01") & (sem_data["date"] <= "2019-12-31")].copy()
    n = cd.network(sem_data_2019, "author_name", "files")
    cd.display(n, output="semantic-network-2019.html", title="Semantic UI Contributor Network 2019")

The Cost of no Architecture

Sunday 19th of April 2020 04:16:34 PM

Like many others, I enjoy various reverse engineering and tear-down stories. Personally, I mean things like iFixit tear-downs and Ken Shirriff’s blog, so I started following this tweet thread by foone.

I'm trying to get this Logitech Harmony 900 remote working, but it keeps telling me the battery level is extremely low… pic.twitter.com/v3CRE2BUD9

— foone (@Foone) April 18, 2020

This continues with another tweet sequence about getting software running on the remote control. Having enjoyed these tweets, I started thinking.

The Harmony remotes are quite expensive in my mind. I can’t find any exact numbers for the number of sold devices, but I found this 2018 Q4 earnings report. Looking at the net sales, I guess the remotes fall under either “Tablets & Other Accessories” or “Smart Home”. These represent net sales of ~107 and ~89 MUSD respectively over 12 months. Let’s pick the lower number and just look at magnitudes. The Harmony 900 seems to have retailed for ~350 USD back when it was new. So, if all the Smart Home sales were Harmonies, we’re looking at 250k units over a year. I’m guessing the real magnitude is around 10k – 100k units annually – but the Harmony 900 is from 2013, so I assume that it sold closer to the lower number, if not below. The market was new and so on.
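For what it's worth, the back-of-envelope estimate above checks out (figures as quoted in the post):

```python
smart_home_net_sales = 89_000_000  # USD over 12 months, the lower of the two segments
harmony_900_price = 350            # USD, approximate retail price when new

# Upper bound: assume every Smart Home dollar was spent on Harmony remotes
units_per_year = smart_home_net_sales / harmony_900_price
print(round(units_per_year))  # 254286, i.e. roughly 250k units/year at most
```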

Then we look at the tweets again. What have we got? Let’s put aside security issues, unencrypted communications, and other clear mistakes and just look at how the device is built.

Flash to drive the UI, two on-board web servers, Lua, QNX and what not. A 233 MHz CPU and ~64MB of flash – for a remote control. From an engineering perspective, this sounds like a fun system to work on – from an architecture perspective, it looks like a ball of mud.

Back in 2013, QNX might have been a good choice compared to Linux. Today, with Yocto and similar tools for developing embedded Linux systems, it feels like an odd choice to add a license cost to such a device. But no biggie. Back in the day this was not an unreasonable choice (and still isn’t for certain applications).

The Flash stuff. There were alternatives back in 2013, but sure, there were plenty of developers at hand, and things like Qt QML were still probably a bit clunky (I can’t recall its state back then – it required OpenGL ES, which I guess was a big ask at the time).

But the mix of techniques and tools. The on-board web servers. The complexity of a small system and the costs it brings to maintenance and testability. If this is the foundation for Harmony remotes, a platform that has been used for the better part of the past decade, I wonder whether the added engineering cost of architecting the platform to be more optimized early on would not have paid off in lower maintenance costs, as well as lower hardware costs.

I know how it is when you’re in a project. The deadline is there in big writing on one of the walls. You can get something working by stringing what you have together with duct tape and glue. The question I’m asking myself is more along the lines of: how do we run embedded systems engineering projects? Where did we go wrong? Why don’t we prioritize thinking and refactoring over just-get-this-thing-out-of-the-door?

The answer is time to market and such, but after over a decade of building on a ball of mud, the economic numbers start adding up in favour of the better-engineered product. For continuous improvement. For spending time thinking about how to improve the system as a whole.

Maui Weekly Report 3

Saturday 18th of April 2020 09:44:08 PM

This week’s progress update brings UI and UX improvements to the MauiKit toolkit and additional features to the apps, pursuing our goals of a convergent environment.

Are you a developer and want to start developing cross-platform and convergent apps, targeting, among other things, the upcoming Linux Mobile devices? Then join us on Telegram: https://t.me/mauiproject.

If you are interested in testing this project and helping out with translations or documentation, you are also more than welcome, ping us, and we can get started.

The Maui Project is free software from the KDE Community developed by the Nitrux team.
This post contains some code snippets to give you an idea of how to use MauiKit. For more detailed documentation, get in touch with us or subscribe to the news feed to keep up to date with the upcoming tutorial.

We are present on Twitter and Mastodon:
https://twitter.com/maui_project
https://mastodon.technology/@mauiproject

MauiKit

MauiKit provides extra components following the Maui Human Interface Guidelines.

ToolBar

The ToolBar, when scrollable, no longer displays a scrollbar; instead it just uses an indicator, since the mouse wheel or a touchpad gesture can be used for scrolling.

The middle content can now be forced to always be centered on the full width with the forceCenterMiddleContent property.

Dialog

The popup dialog close button now follows the system settings on Linux; on the other platforms, it follows the default side of the window controls.

On Linux, the side of the button is determined by reading the KWin config settings.

The dialogs now have a stricter close policy: the popup dialog is only closed on explicit actions, like clicking the close icon or pressing the Esc key.

The Dialog component now styles the default buttons more consistently:

TabButton

The TabButton used in the TabBar has a default close button; this button is also placed on the appropriate side depending on the platform.

Editor & Document Handler

The Editor control and DocumentHandler are now more stable.

A bug causing a crash when changing the highlighting syntax theme has been fixed.

Page

The Page component has a header and a footer; the header can be moved to the bottom on phone devices for reachability reasons. This behavior now has its own property, altHeader, which adds to the new header and footer properties: floatingHeader, floatingFooter, autoHideHeader, headerPositioning, and footerPositioning.

The floating header/footer blurs the overlapping content behind it.

    Maui.ApplicationWindow {
        id: root
        height: 800
        width: height

        Maui.App.enableCSD: true
        autoHideHeader: true
        floatingHeader: true

        headBar.leftContent: ToolButton {
            icon.name: "love"
        }

        footBar.leftContent: ToolButton {
            icon.name: "love"
            onClicked: root.altHeader = !root.altHeader
        }

        flickable: list

        ListView {
            id: list
            anchors.fill: parent
            model: 20
            spacing: 20

            delegate: Rectangle {
                width: 100
                height: 68
                color: "orange"
            }
        }
    }


https://nxos.org/wp-content/uploads/2020/04/Peek-2020-04-18-09-55.webm

ToolActions

ToolActions groups related actions together; this update allows the actions to display text by using hints such as ToolButton.TextBesideIcon, ToolButton.TextOnly, and ToolButton.IconOnly.

Maui Apps

Some apps received small papercut fixes coming from MauiKit, and small features were added.

Index

The file manager now has the option to use a translucent sidebar; this was also added to Nota and Vvave. It can be disabled if your compositor settings do not enable the blur effect.

More options were added to the Settings dialog: a stick-sidebar option, to choose whether to collapse the places sidebar into a thin bar or hide it completely, and finally an icon size option.

Nota

The simple text editor now includes more configurable options, such as a new focus mode, switching syntax highlighting on and off, and editor background color options.

Nota also now has a translucent sidebar.

The floating button to create new documents no longer overlaps the editor toolbars, and on small screens the editor takes the full screen height when editing.

The new focus mode puts the content first without any other UI distractions.

Station

The terminal emulator now has a settings dialog to configure different terminal style options.


Pix

Pix now works on macOS. It still needs platform integration, but that should eventually come from the MauiKit side and benefit all the other apps too.

Added the option to change the thumbnails previews:

In Progress

CSD

On GNU/Linux, MauiKit looks into the KWin config to determine the right window-control theme and button layout. For this to work with third-party themes, a copy of the wanted theme must be placed in the Maui config folder. The structure is pretty simple: if the theme is named BreezeSierra, then a BreezeSierra folder must exist in ~/.local/share/maui/csd, with a config.conf file describing the button assets and other properties, like the border radius and whether to mask the buttons with the system color scheme or not:

    [Close]
    Normal=close.svg
    Hover=close-hover.svg
    Pressed=close-pressed.svg
    Backdrop=close-backdrop.svg

    [Maximize]
    Normal=maximize.svg
    Hover=maximize-hover.svg
    Pressed=maximize-pressed.svg
    Backdrop=maximize-backdrop.svg

    [Restore]
    Normal=restore.svg
    Hover=restore-hover.svg
    Pressed=restore-pressed.svg
    Backdrop=restore-backdrop.svg

    [Minimize]
    Normal=minimize.svg
    Hover=minimize-hover.svg
    Pressed=minimize-pressed.svg
    Backdrop=minimize-backdrop.svg

    [FullScreen]
    Normal=fullscreen.svg

    [Decoration]
    BorderRadius=6
    MaskButtons=false

Supported Platforms

Platform integration for Android and macOS is still a work in progress: touch bar actions integration for macOS, and native notifications for Android.

KIO usage on Windows and macOS is wanted, but it’s still under development.

UI Controls

The TabsBrowser is still under development; it will allow us to create dynamic tabs and handle the interaction quickly.

Testing

We are testing a more immersive UX for some applications; this has been done by making use of the new Page toolbar properties and CSD.

Below you can see a video demo of how the immersive UX concept has been applied to Maui Apps like Pix, Vvave, and Nota.

The post Maui Weekly Report 3 appeared first on Nitrux — #YourNextOS.

KWinFT packaged for openSUSE, KWin-LowLatency updated

Saturday 18th of April 2020 02:23:19 PM

The KWin Fast Track (KWinFT) project was recently announced and I’ve taken the liberty to package it for openSUSE Tumbleweed and Leap 15.2 in my home:KAMiKAZOW:KDE repository:

First review: I don’t notice a difference to regular KWin – I guess that’s a good thing for a new project.

I won’t submit the package to the KDE repository after they refused to accept KWin-LowLatency because they don’t want 3rd party packages there. They will just do the same again. If they ever change their minds, I’ll be happy to submit both again.

Speaking of which: I’ve updated KWin-LowLatency to 5.18.4-4.

Public Transport Line Metadata

Saturday 18th of April 2020 07:30:00 AM

KPublicTransport gives us access to real-time departure and journey information for many public transport systems. However, the presentation of the result isn’t ideal yet, as we are missing access to the characteristic symbols/icons and colors often associated with public transport lines.

Line Metadata

There are a number of interesting pieces of information about public transport lines relevant for display:

  • The name of the line. That’s often a simple number or letter, rarely also a more complex name.
  • The icon of the line. That’s usually a fairly basic colored shape with the line name or number on it. Those tend to be used on all local signage, so showing those icons in our apps significantly helps with guiding you to the right place.
  • The name, icon and color for the product type or mode of the line, say a tram, subway or rapid transit service. Logos for those tend to be prevalent in local signage as well, e.g. the “M” for subways in France, or the white “S” on a green circle for rapid transit trains in Germany.

Query results from online backends for departures or journeys typically contain the line name, and sometimes a color (without any guarantees of whether this is a line color or a product color). Line or product logos are not provided by any backend we have so far.

So, we need a separate data source for this. However, the very simple and short line names/numbers that are common here also mean they are far from unique. Matching them against another data source requires that we also have a geographic bounding box for each line, so we can distinguish between, say, the “U1” in Berlin and the “U1” in Hamburg.
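To illustrate why the bounding box matters, a minimal lookup could work along these lines (made-up data and function names for illustration, not KPublicTransport's actual API):

```python
def find_line(name, lat, lon, lines):
    """Return the candidate lines whose name matches and whose
    bounding box contains the query coordinate."""
    def contains(bbox, lat, lon):
        min_lat, min_lon, max_lat, max_lon = bbox
        return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
    return [l for l in lines if l["name"] == name and contains(l["bbox"], lat, lon)]

# Two lines sharing the short name "U1" (rough, made-up bounding boxes).
lines = [
    {"name": "U1", "city": "Berlin",  "bbox": (52.4, 13.2, 52.6, 13.6)},
    {"name": "U1", "city": "Hamburg", "bbox": (53.5,  9.8, 53.7, 10.2)},
]
print(find_line("U1", 52.50, 13.39, lines)[0]["city"])  # Berlin
```

The name alone is ambiguous; the query coordinate is what picks the right line.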

Data Sources

The data we are looking for is available in our two favorite sources, Wikidata and OpenStreetMap. We need both here: bounding boxes are only available via OSM, and logos are only available via Wikidata. Line colors and information about the type of service are available in both. The corresponding entities are linked by each having a property referring to the unique identifier in the other database.

What we are looking for in the OSM data are relations of type route_master, and their member relations of type route. A route_master in OSM is what KPublicTransport calls a “line”, i.e. a set of routes that operate under the same name. In the common simple case a line consists of two routes, each being a sequence of stops in one direction.

The Wikidata items linked to an OSM route_master are ideally instances of subclasses of transport line or railway line, and have a part of relation to an item describing the product or mode of transport.

Obtaining Data

The first attempt at getting the OSM data was using an Overpass query, following Julius Tens’ previous work on this. This works great on relatively small areas such as a single city, but failed to scale to world-wide coverage. Even with various optimizations, dynamically adjusting queried areas, and distributing this over five servers, a single run would have taken many hours, if not a few days, to complete.

Instead we are now using a full local copy of the OSM database. That’s still a multi-hour download initially and needs 150+GB of SSD storage to meaningfully work with it, but it can be kept up to date fairly efficiently thanks to incremental updates, and any further experiments on that data then don’t incur cost on common infrastructure.

There are various ways to work with the full OSM dataset locally; I ended up with a fairly crude but simple approach:

  • Use osmfilter to run basic filters on the full file, massively reducing the dataset to the elements we are interested in. This can take 15-30 minutes (largely I/O bound), but the result is a subset small enough to fit into RAM entirely and to make any further processing almost instantaneous.
  • Use osmconvert to add bounding box annotations, and subsequently drop all depending ways and nodes with osmfilter, to simplify processing later on.
  • Convert the binary file we have at this point to OSM XML using osmconvert, which can then be fed into the code that had already been written for the Overpass approach as-is.

After sorting out all incomplete elements and merging things belonging to the same line we end up with less than 2,000 Wikidata items to retrieve, which can be done easily with the basic Wikidata API, rather than using the much more expensive SPARQL queries. The same API also gives us the image metadata and license information for the few hundred line and product logo candidates.

The entire process now completes in a bit less than an hour if your local OSM copy needs updating, and about a minute otherwise, good enough for the many experimental runs needed until this produced satisfying results.

Storing Data

The above should already explain why on-demand online access isn’t an option for most of this, and we instead want to ship this information with the KPublicTransport library, not to mention the privacy advantages of this.

In the first version we are looking at about 2,500 tram, subway and rapid transit lines, but ideally we want something that can handle ten times that when also considering busses and regional as well as long-distance train lines.

Information about the lines fits into about ~25kB, excluding the logos, so that part is easy. Considering all possible logos would result in a large amount of data though; even limiting this to files no larger than 10kB each would still give us ~5MB. What we therefore ended up doing is to store only the file names (~11kB) and download the files on first use from Wikidata.

That leaves the lookup of the metadata from a given name and geo coordinate. As storing bounding boxes per line and doing a sequential search isn’t going to scale, we need a spatial index for this. After a few different attempts this ended up being a Z-order curve-based quadtree. That might sound complicated but is a surprisingly simple approach: By interleaving the bits of integer representations of latitude and longitude you get a 1D representation of the 2D space (a Z-order curve). Besides being storable in simple data structures, this has interesting properties such as bit-shifting corresponding to switching depths in a quadtree. With that we get a very fast lookup, at just ~13kB of extra space cost. That is only marginally higher than storing bounding boxes for all lines for sequential search (assuming similar tricks for compact storage rather than the raw 16 bytes per line, in which case this would actually be much larger).

To put all those numbers in perspective, the JSON configurations files for the backend services included in the KPublicTransport library as QResource total about 140kB, so I think this is an acceptable result :)

Integration in KDE Itinerary

KDE Itinerary is already making use of this in a few places, such as the transfer elements, the alternative journey search and the local transport departure schedules at points of your journey.

Transfer element showing official public transport line icons.

You can help!

To make this work, we depend on information available on Wikidata and OpenStreetMap. Reviewing the data there for public transport lines around your home (or other locations of interest to you) is a good way to help. I’ll write a separate post about how to do this in more detail; this one is already getting too long again.

This week in KDE: our cup overfloweth with improvements

Saturday 18th of April 2020 05:46:06 AM

Three main topics will hold the floor today: Dolphin and other file management stuff, Plasma polish, and Wayland. We’re making a bit of a push on Wayland stuff, so you should see more Wayland fixes going forward! For all three, we’re concentrating on fixing longstanding issues. There’s more too, of course!

Also, as you’ve no doubt noticed, I’m going to try sending these posts out on Saturday morning Europe time instead of Sunday. Hopefully it will be a nice way to start your weekend.

Qt, range-based for loops and structured bindings

Saturday 18th of April 2020 12:00:00 AM

This post has first been published on KDAB.com

Qt has a long history. Its first stable version was released before C++ was first standardized and long before the various C++ compiler vendors started shipping usable implementations of the C++ standard library. Because of this, Qt often followed (and still follows) design idioms that feel unnatural to the usual C++ developer.

This can have some downsides for the Qt ecosystem. The evolution of the C++ programming language which sped up quite a bit in the last decade often brings improvements which don’t fit well with the Qt philosophy. In this blog, I offer some ways to work with this.

Range-based for loops

C++11 brought us the range-based for loop which allows easy iteration over any iterable sequence, including common containers such as std::vector, std::map, etc.

for (auto name : names) {
    // ...
}

When you write code like the above, the compiler (C++11) sees the following:

{
    auto&& __range = names;
    for (auto __begin = begin(__range), __end = end(__range); __begin != __end; ++__begin) {
        auto&& name = *__begin;
        // ...
    }
}

It takes the begin and end iterators of the names collection, and iterates from one to the other.

You can support my work on Patreon, or you can get my book Functional Programming in C++ at Manning if you're into that sort of thing.

Should KDE fork CHMLib?

Friday 17th of April 2020 09:22:00 PM
CHMLib is a library to handle CHM files.

It is used by Okular and other applications to show those files.

It hasn't had a release in 11 years.

It is packaged by all major distributions.

A few weeks ago I got annoyed because we need to carry a patch in the Okular flathub build: the code is not great and it defines its own int types.

I tried contacting the upstream author but, unsurprisingly after 11 years, he doesn't seem to care much, and I got no answer.

Then I saw that various distributions carry different random sets of patches, which is not great.

So I said, OK, maybe we should just fork it, and emailed the 14 people listed at Repology as package maintainers for CHMLib, asking how they would react to KDE forking CHMLib in an effort to do some basic maintenance, i.e. get it to compile without needing patches, make sure all the patches the different distributions have are in the same place, etc.

1 packager answered saying "great"
1 packager answered "nah we want to follow upstream" (... which part of upstream is dead did they not understand?)
1 person answered saying "i haven't been packaging for $DISTRO for ages" (makes sense given how old the package is)
1 person answered saying "not a maintainer anymore but i think you should not break API/ABI of CHMLib" (which makes sense, never said that was planned)

And that's it, so only 4 out of 14 answers and only one of them encouraging.

So I'm asking *YOU*: should we push for a fork, or should I stop this craziness and do something more productive?

On forking

Thursday 16th of April 2020 06:36:01 PM

This blog post expresses my personal opinion. Currently I’m not an active KWin developer, so I think I have an outside view. I’m not contributing due to personal reasons, mostly lack of time. I would love to start contributing again, but in the current pandemic situation I won’t find the time for it. So take me as just an armchair developer with some background knowledge of KWin.

As the former maintainer of KWin I’m rather shocked to read that a fellow KWin developer announced a fork. Personally I find this very disappointing and very sad. In my opinion a fork should be the last option, taken only after every other option has failed. There are very few legitimate reasons, and I think a fork always harms both projects due to the split of development efforts and the bad publicity such a fork creates. The announcement renders KWin in a light it doesn’t deserve. There are of course successful forks such as X.org and LibreOffice, but those forks were created due to serious issues with the original project and had most of the developers on board. I do not see any such reason in current KWin development! For the same reason I also do not like the lowlatency fork. Of course it is an important area where KWin needs improvements, but I would love to see that happen in the KWin repository and not in an outside repository. Working together, bringing the experience together, is much better than working on your own.

Personally I am very happy how KWin evolved since I stepped down as the main developer and maintainer. I was afraid that my loss in activity could not be compensated and I am very, very happy to see that development activity in KWin is much higher than it was most of the years before. Looking at the mailing list I see new names and old ones. KWin looks very healthy to me from a developer perspective.

Having read the announcement and the reasoning for the fork, I was left puzzled. What went wrong? When I looked at the mailing lists I never noticed any conflicts. In fact there is even strong agreement on the areas which need work, such as a reworked compositor pipeline, the KWayland situation, etc. Of course we need to be careful when rewriting or reworking central parts of KWin. One of the main areas of work in preparing KWin for Wayland was to move the code base in ways that allow rewriting parts without risking the stability of the whole project. Personally I think this has served KWin well. And even if the KWin team does not want a quick rewrite of central parts, that is no reason for a fork. This can still be handled upstream, through branches. One could even release an “experimental” KWin release with central parts reworked. Overall I just don’t get it and hope that what matters is a good KWin.

Also I must say that if I wanted to write a new Wayland compositor I would not fork KWin. KWin started as an X11 window manager and those roots will always be there. In our decision on how to handle the transition to Wayland this was a central aspect as we still have to provide an X11 window manager for those who cannot and do not want to switch. Thus we needed to evolve KWin instead of starting something new. But for a Wayland first compositor I would not start with KWin.

This blog post has comments disabled as I won’t have the time to moderate or answer.

The Internet Talks

Thursday 16th of April 2020 05:46:58 PM

I’ve previously written about the licensing and embedded talks of foss-north 2020. This time around, I’d like to share the recordings of the Internet-related talks.

Internet is a very broad topic, so it is hard to classify talks as not being Internet related these days, but the following three talks stand out.

The first speaker is an old-time speaker at foss-north, Daniel Stenberg. He has spoken at foss-north several times, but never about his main claim to fame: curl. This time he remedied this by delivering a talk about how to curl better.

Maintaining privacy on the Internet is a big topic. This is a field where it is hard to deliver black or white answers. Elisabet Lobo-Vesga presents DPella, a query language for differential privacy. Using this technology, it is possible to make the tradeoff between how private the user is and how detailed the returned data is.

The Internet talks end with Patrik Fältström. One of the people who has been around the Swedish Internet scene the longest. He talks about Keeping Time. It is a journey into leap seconds, atomic clocks, the speed of light and other hassles when keeping clocks in sync over a large network.

The talks are already available on conf.tube, and the presentation material can be found by following the links to each speaker. For those of you who prefer YouTube, the talks will be made available shortly on the foss-north channel. Subscribe to get notified when they are.

Plasma Browser Integration 1.7.5

Thursday 16th of April 2020 04:00:00 PM

I’m pleased to announce the immediate availability of Plasma Browser Integration version 1.7.5 on the Chrome Web Store as well as Firefox Add-Ons page. I hope you’re all safe and well in these odd times. As you can tell from the version number this is a little more than just a maintenance release. It comes with an assortment of important bug fixes, refinements, and translation updates.

Konqi surfing the world wide web

Plasma Browser Integration bridges the gap between your browser and the Plasma desktop. It lets you share links, find browser tabs in KRunner, monitor download progress in the notification center, and control music and video playback anytime from within Plasma, or even from your phone using KDE Connect!

What’s new?

As the Chrome Web Store doesn’t appear to support uploading a changelog, I’ve created a Changelog Page on our Community Wiki instead, which is now also linked from the “About” tab of the extension settings page.

Improved media controls

As usual this release brings various improvements to the media controls feature. The most notable update is for Firefox users since we made some of its aspects work on certain pages with content security policies in place. For example, with this update Spotify Web Player on Firefox can now be controlled again. Special treatment is unfortunately needed as Firefox enforces those policies even for code injected by extensions. While it is good to see such security measures being rolled out across the web, it prevents us from interacting with websites in a way that allows us to provide this functionality. We’re still looking for ways to make it work for all attributes without code duplication but so far this is as good as it gets. Many thanks to Fabian Vogt for helping to investigate this! On top of that, this change unbreaks certain websites that rely on the HTML Audio prototype, most commonly browser games.

Additionally, an issue with audio focus stealing prevention has been corrected. The extension also better handles the case of a media playing browser tab being killed due to low memory or having crashed. However, this can probably only be fully addressed using the processes extension API which is only available in developer builds.

Downloads in recent documents

Quickly access recently downloaded files without having to launch the browser or search the Downloads folder

For added convenience, Plasma Browser Integration now automatically adds downloaded files to the lists of recent documents available in various parts of the desktop when running Plasma 5.18 LTS or later. Of course this doesn’t apply to files downloaded from incognito tabs, and you can disable this feature in extension settings.

Web Share improvements

Clicking a shared-content confirmation opens the given link in a new tab

In its previous release Plasma Browser Integration gained support for Web Share API Level 1 through KDE’s Purpose Framework. This lets you share links, images, and other website content to a variety of services, as well as let websites trigger a share prompt by calling navigator.share.

Multiple share targets with menu icons on Firefox

In this release the “Share…” context menu entry is properly hidden when disabled in settings or while running an older version of Plasma. On Firefox, sharing a link will pass its label to the chosen service as subject/title. Also on Firefox, the context menu now shows a fitting icon when there are multiple reachable devices. Plasma 5.18.5 will also ship a fix so it no longer gets stuck whilst hovering a share target in the menu but then pressing Escape to cancel. Furthermore, after uploading content to a service that returns the shared URL, the confirmation notification can now be clicked to open said content in a new tab.

Better dark mode

Lastly, Firefox was given a light toolbar icon for the browser action when using a dark theme. This is achieved through the theme_icons key which unfortunately isn’t supported by either Chrome or any of the JavaScript extension APIs. I tried manually setting a light icon by querying @media(prefers-color-scheme: dark) but Chrome doesn’t appear to set this automatically on Linux and even if it did, doesn’t necessarily mean the toolbar is dark.

Light Plasma toolbar icon in a dark Firefox

Activities coming to Firefox soon?

Work in progress and not part of this release: assigning tabs to different activities

While browsing Firefox documentation the other day I noticed it had the ability to hide tabs from the tab bar. Immediately I realized this would become very handy for adding some form of activity-awareness to Firefox. While I can’t control window management, at least I could have tabs inside a window show and hide as you switch activities. With Firefox’s session APIs an activity assignment is even kept when restoring tabs after a reboot. Furthermore, tabs that are hidden can also be muted or unloaded to save resources. This is where I need your input: if you’re a heavy user of Activities, please let me know how you think this feature should behave and how you would expect to be able to use it.

Opera Store maintainer wanted

Few of you probably knew that the extension actually supported Chromium-based Opera. I always thought you could just install it from the Chrome Web Store, like you can in Vivaldi and other Chromium-based browsers. Recently, I learned that this was not the case – at least not out of the box – so I tried to get it uploaded to the Opera Add-ons page. I had a KDE Community account created and uploaded a package. Unfortunately, despite having been signed with the same key as the Chrome extension, the extension ID turned out different. This meant that existing Plasma releases wouldn’t work with it as we have to specify which extension IDs exactly are allowed to communicate with Plasma.

While this was somewhat inconvenient, the extension then got rejected for retaining the public key in its manifest.json. Admittedly, it is only for development, but this had never been an issue for Chrome or Firefox, and more importantly, it eases development significantly when you can just git clone the repository and run it from there. That being said, if you’re good with build systems and want to maintain our presence on the Opera Add-ons store, please get in touch!

LaKademy 2019

Thursday 16th of April 2020 03:12:29 PM

Last November (2019), KDE fellows from Latin America arrived in Salvador, Brazil, to attend one more edition of LaKademy – the Latin American Akademy. That was the 7th edition of the event (or the 8th, if you count Akademy-BR as the first LaKademy) and the second one with Salvador as host city. No problem for me: in fact I would like to move there and live in Salvador for at least a few years.

Commentary on the Qt situation

Thursday 16th of April 2020 02:07:44 PM
A lot of things have been going on with Qt these days. It all started with The Qt Company trying to get an increase in their revenues, specifically this blog post.

Introducing a command line client for GitLab

Thursday 16th of April 2020 12:00:00 AM

As KDE is currently migrating from Phabricator to GitLab, it’s a good time to compare the advantages and disadvantages of both platforms. For Phabricator, one of the disadvantages for me is the non-repository-focused structure. For example, you can’t easily get from a project’s repository to the patches submitted for it. The same is true for the tasks belonging to a project. This makes it harder for new contributors coming from GitHub-like platforms to submit their patches.

GitLab in turn lacks some of the power-user features of Phabricator. One of them is an easy-to-use command line client that does not merely expose the JSON api as shell commands. Since Phabricator has such a client, called arcanist, or in short, arc, which KDE developers and contributors are already used to, I started developing something similar for GitLab.

With it, for example creating a merge request is as easy as running git lab diff in a branch containing your commit(s). To keep track of open merge requests, there is the git lab list command, which lists your recently opened merge requests and their status.

For project maintainers, testing merge requests becomes easier as well. The git lab list --project command displays merge requests of only the repository the command is run in. To make testing merge requests locally easier, there is the git lab patch command, which checks out a merge request in the local tree.

The tool already supports using multiple GitLab instances, for example invent.kde.org and gitlab.com. Which GitLab instance to use is detected from the repository’s origin remote, so there is no need to add something like .arcconfig files to your repository.
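The instance detection from the origin remote could look roughly like this; the function name and regexes are a hypothetical sketch, not git-lab's actual code.

```python
# Sketch: derive the GitLab host from a repository's origin remote URL,
# covering the common SSH and HTTPS remote forms. Hypothetical helper.
import re

def instance_host(origin_url: str) -> str:
    """Return the GitLab host for an SSH or HTTPS origin URL."""
    # SSH form: git@invent.kde.org:jbbgameich/git-lab.git
    m = re.match(r"^[\w.-]+@([\w.-]+):", origin_url)
    if m:
        return m.group(1)
    # HTTPS form: https://invent.kde.org/jbbgameich/git-lab.git
    m = re.match(r"^https?://([\w.-]+)/", origin_url)
    if m:
        return m.group(1)
    raise ValueError(f"unrecognized remote URL: {origin_url}")

assert instance_host("git@invent.kde.org:jbbgameich/git-lab.git") == "invent.kde.org"
assert instance_host("https://gitlab.com/example/project.git") == "gitlab.com"
```

Keying the configuration on this host is what lets one tool talk to invent.kde.org and gitlab.com without per-repository config files.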

In contrast to arcanist, you don’t need a full PHP runtime; Python 3 with a small number of packages from PyPI is enough.

If you are already convinced, you can install the command using

pip3 install git+https://invent.kde.org/jbbgameich/git-lab

If you have any suggestions on how to do things better or found a bug, feel free to open issues on my repository on invent.kde.org. I especially welcome merge requests. More detailed information can be found in the README of the repository.

The KWinFT project

Wednesday 15th of April 2020 11:00:00 PM

I am pleased to announce the KWinFT project and with it the first public release of its major open source offerings KWinFT and Wrapland, drop-in replacements for KDE's window manager KWin and its accompanying KWayland library.

The KWinFT project was founded by me at the beginning of this year with the goal to accelerate development significantly in comparison to KWin. Classic KWin can only be moved with caution, since many people rely on it in their daily computing and there are just as many other stakeholders. In this respect, at least for some time, I anticipated being able to push KWinFT forward in a much more dynamic way.

Over time, though, I refined this goal and defined additional objectives to supplement the initial vision and ensure its longevity. One of these has become, in my view, equally important: providing a sane, modern, well-organized development process. This is something KWinFT users won’t notice directly but will hopefully benefit from indirectly, since it enables the initial goal of rapid development while retaining the overall stability of the product.

What is in there for you

If you are primarily a consumer of Linux graphics technology, this announcement may not seem especially exciting to you. Right now and in the near future the focus of KWinFT is inwards: to formulate and establish great structures for its development process, multiplying all later development efforts.

Examples are continuous integration with different code linters, scheduled and automatic builds and tests, as well as policies to increase developers' effectiveness. This will hopefully accelerate KWinFT's progress in many ways in the future.

But there are already some experimental features in the first release that you might look out for:

  • My rework of KWin's composition pipeline that, according to some early feedback last year, improves the presentation greatly on X11 and Wayland. Additionally a timer was added to minimize the latency from image creation to its depiction on screen.
  • The Wayland viewporter extension was implemented, enabling better presentation of content, for example for video players, and, with the next XWayland major release, emulation of resolution changes for many older games.
  • Full support for output rotation and mirroring in the Wayland session.
How to get KWinFT not later than now

Does this sound interesting to you? You are in luck! The first official release is already available and if you use Manjaro Linux with its unstable branch enabled, you can even try it out right now by installing the kwinft package.

And you can switch back to classic KWin in no time by installing the kwin package again. Your dependencies will be updated in both directions without further ado.

I hope KWinFT and Wrapland will soon become available in other distributions as well. But at this moment I must thank the Manjaro team for making this happen on very short notice.

Here I also want to thank Ilya Bizyaev for testing my builds from time to time in the last few weeks and giving me direct feedback when something needed to be fixed.

Future directions

For the rest of this article let me outline the strategy I will follow in the future with the KWinFT project. I will expand on these goals in upcoming blog posts as time permits.

An optimized development process

I already mentioned that defining and maintaining a healthy development process in KWinFT is an absolute focus objective for me.

This is the basis on which any future development effort picks up momentum, or, if it is neglected, is held back. And that was a huge problem with KWin in the last year; I can say with certainty that I was not the only developer who suffered because of it.

I tried to improve the situation inside KWin in the past, but the larger an organization and the more numerous the stakeholders become, the more difficult it is for any form of change to manifest.

In open source software development we have the amazing advantage of being able to circumvent this blockade by rebooting a project in a fresh organisational paradigm, in what we call a fork. This has as many risks and challenges as one could expect, so such a decision should not be taken lightly and needs a lot of conviction, preparation and dedication.

I won't go into any more detail here, but I plan to write more articles on what I see as current deficits in KDE's organisation of development processes, how I plan to make sure such issues won't plague KWinFT, and in which ways these solutions could be at least partly adopted in KDE in order to improve the situation there too.

And while I don't have much positive to say about the current state of KDE right now, don't forget that KDE is an organisation which has withstood the test of time, with a history reaching back more than 20 years. An organisation that has had a positive impact on many people. Such an organisation must not be slated but fostered and enhanced.

KWinFT with focus on Wayland

The project is called KWinFT because its primary offering right now is the window manager KWinFT, a fork of KWin. The strategy I will follow for this open source offering is centered around developer focus.

The time and resources of open source developers are not limitless and the window compositor is a central building piece in any desktop making it a natural point of contention.

One solution is permanently trying to make everyone happy through compromise and consensus. That is the normal pattern in large organisations. Dynamic progress is the opposite of that, instead featuring a trial-and-error approach and the sad reality that sometimes corners must be cut and hearts be broken for achieving a greater good.

Both these patterns can be valid approaches in different times and contexts. KDE naturally can only employ the first one, KWinFT is destined to employ the second one.

A major focus for the overall KWinFT project, and in particular its window manager, will be Wayland. That won't mean making it just about usable at whatever the cost, but putting KWinFT's Wayland session on a rock-solid foundation: reworking it as often as needed, grinding it out until it is right, without any compromise.

And boy, does it need that! To say it bluntly, even at the risk of being quoted out of context: in 2020, KWin and KWayland are still a mess.

Sure, the superficial impression has improved over time, but there are many deep and fundamental flaws in the architecture, flaws that require one or better several developers and not days or weeks but months of project time. Let me skip the details for now and instead go directly to the biggest offender.

Wrapland in modern C++ and without Qt

Wrapland was forked from KWayland alongside KWinFT. Of course I already knew KWayland's architecture, but there is a difference between knowing and understanding. What I have learned about KWayland's internals in the last few months was shocking. With the current vision that I follow for Wrapland, I would not call Wrapland a fork anymore, but rather a reboot.

The very first issue ticket I opened in Wrapland was somewhat of a gamble back then, but in hindsight quite visionary. The issue asks to remove Wrapland's Qt dependency. When I opened this ticket I wasn't aware of all the puzzle pieces; I couldn't be. But now, two months later, I am more convinced of that goal than ever before.

A C++ library that provides a wrapper around the C-style libwayland library is useful: a C++ library in conformance with the C++ Core Guidelines, leveraging the most current C++ standard. That means in particular doing without Qt concepts like their moc, raw pointers everywhere, and the prevailing use of dynamic over static inheritance.

KWinFT, and potentially many other applications, from games with nested composition up to the UIs of large industrial machinery, could make great use of a well-designed, unit-tested, battle-hardened C++ Wayland library that employs modern C++ features for type and memory safety to their full extent. And that only covers Wrapland's server part. Although clients often use a complete toolkit for their windowing system integration, it is not hard to envision use cases where more direct access is needed and a C++ library is preferred.

The main advantage of leveraging Qt, in comparison, would be the possibility of adding QML bindings. This can be useful, as some interesting applications leveraging QtWayland's server part prove, but it is minuscule in KWinFT's use case. And what the compositor that Wrapland is written against does not make use of cannot be a development objective of this library in the foreseeable future.

I am currently rewriting the server part of Wrapland in this spirit. You can check out the overview ticket that I created for planning the rewrite and the current prototype I am working on. Note that the development is still ongoing on a fundamental level and there might be more iterations necessary before a final structure manifests.

While the remodel of the server part is certainly exciting, and I do plan something similar for the client part, that project will need to wait some more. For now I "only" reworked most of the client library to not leak memory in every second place. This allows running the unit tests robustly on the GitLab CI for merge requests and on push. This rework, which also contained fixes for the server part, resulted in a massive merge with 40 commits and over 6000 changed lines.

A beacon of modern technologies

Leveraging modern language features of C++ is one objective, but a far more important one lies in the domain KWinFT was really created for: computer graphics and their presentation and organisation to the user in an optimal way.

But here I declare just a single goal: KWinFT must be at the top of every major development in this domain.

Too often in the past KWin was sidelined, or rather sidelined itself, when new technology emerged, only trying to catch up later on. The state of Wayland on Plasma in 2020 is testament to that. In contrast, KWinFT shall be open to new developments in the larger community and, if manpower permits, spearhead such developments itself, not necessarily as a maverick but in concert with the many other great single- and multi-developer projects on freedesktop.org and beyond. This leads over to the final founding principle of KWinFT.

Open to other communities and technologies

A major goal I pursued last year already as a KWin developer and that I want to expand upon with KWinFT is my commitment to building and maintaining meaningful relations with other open source communities.

Meaningful means here on one side on a personal level, like when I attended the X.Org Developer and Wine conferences on two consecutive weekends in October last year in Canada and on the way back home the Gnome Shell meetup in the Netherlands.

But meaningful also means working together: being open to the technologies other projects provide, trying to increase interoperability instead of locking yourself into your own technology stack.

Of course in this respect the primary address that comes to mind is freedesktop.org with the Wayland and X11 projects. But also working together with teams developing other compositors can be very rewarding.

For example in Wrapland I recently added a client implementation of wlroots' output management protocol. This will allow users of wlroots-based compositors in the future to use KScreen for configuring their outputs.

I want to expand upon that by sharing more protocols and more tools with fellow compositor developers. How about an internal wlroots-based compositor for Wrapland's autotests? This would double-check not only Wrapland's protocol implementations but also wlroots' ones at the same time. If you are interested in designing such a solution in greater detail, check out Wrapland's contributing guideline and get in touch.

FreeBSD progress on Slimbook Base14

Wednesday 15th of April 2020 10:00:00 PM

Some notes on the – thus far unsuccessful – work towards getting a full KDE-on-FreeBSD environment working on the Slimbook Base14. tl;dr version: “driver issues”.

Two-and-a-half years ago, I got a KDE Slimbook, and it was an excellent machine – price-competitive with similar hardware, but supporting the Free Software world. I think it came with KDE neon pre-installed, but it has run many other things in the meantime.

This Christmas, my son’s second-hand Dell laptop power brick exploded (the battery was already dead) and so there was one obvious solution: get myself a new Slimbook, and hand down the KDE Slimbook to him. So he now has my Gitlab diversity sticker, and a nopetopus, and a KDE neon installation on a fine – but somewhat battered looking – laptop.

I have a new shiny thing, the Slimbook Base 14. Again, price-competitive, Free Software positive, and a nice shiny machine. It has a Purr sticker and also a Run BSD sticker, openSUSE and adopteunchaton. Cats seem to be the thing for this laptop.

At FOSDEM I presented KDE on FreeBSD from this laptop, except it was running openSUSE at the time. It still does, most of the time, since I have not yet gotten FreeBSD to run really usefully on it. Here’s a screenshot from openSUSE, and my notes on how far I’ve gotten.

I’d like to say thanks to the Slimbook folks for being supportive while I’m figuring things out. FreeBSD laptops are a bit of a niche thing, and having more of them is good for Free Software, but probably not a huge untapped market.

Hardware notes
  • I got the Base 14 because it’s the same size as the original KDE Slimbook, which fits well on the tray table in a Dutch train (not that that is relevant during lockdown). It’s a 4C/8T 10th generation i5, although at a slow-ish 1.6GHz (thermals, I guess). 16GB of main memory makes it a capable developer machine.
  • Battery life looks like 3 hours (with a re-installed Tumbleweed) which isn’t great, so it’s less of a road-warrior device than the older Slimbook. I may have to tweak power controls more.
  • Unlike the KDE Slimbook, this has a full-size HDMI and a full-size RJ45 connector, so it’s a bit easier to get connected to the world.
  • There’s room for internal expansion – probably depending on the configuration you order, but mine can physically fit another memory module and a SATA SSD. There’s that pesky screw under the “void your warranty here” sticker.
  • Slimbook upgraded the wireless to an Intel 9560 on my order for free. It gets reported as an AC9462 in lots of tools, but the card has PCI device ID 0x02f0, which means it’s .. well, a 9462, and it’s not clear whether the 9000- or the 22000-series Linux code supports it; FreeBSD doesn’t recognize it at all.
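As an aside, that device ID can be read straight out of pciconf -l output on FreeBSD, which packs device and vendor IDs into the chip= field (device in the high 16 bits, vendor in the low 16). A minimal Python sketch; the sample line below is synthetic, modelled on the pciconf format, using the 0x02f0 device ID mentioned above:

```python
import re

def chip_ids(pciconf_line):
    """Extract (vendor, device) PCI IDs from a pciconf -l style line.

    FreeBSD's pciconf encodes them as chip=0xDDDDVVVV: the device id
    sits in the high 16 bits, the vendor id in the low 16 bits.
    """
    m = re.search(r"chip=0x([0-9a-f]{4})([0-9a-f]{4})", pciconf_line)
    if not m:
        return None
    device, vendor = (int(g, 16) for g in m.groups())
    return vendor, device

# Synthetic sample line, modelled on pciconf -l output; the chip value
# combines the 0x02f0 device id with Intel's vendor id 0x8086.
sample = "none0@pci0:0:20:3: class=0x028000 chip=0x02f08086 rev=0x00 hdr=0x00"
vendor, device = chip_ids(sample)
print(f"vendor=0x{vendor:04x} device=0x{device:04x}")  # vendor=0x8086 device=0x02f0
```

Matching that device id against the iwlwifi source (or FreeBSD's iwm) is then what tells you which driver generation, if any, claims the card.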
How to Break Stuff
  • On power-up, when the Slimbook logo appears on-screen, hit F2 to get to the UEFI interface. You’ll need that to boot from USB, which by default is tried only after the installed NVMe disk. I changed the power-on timeout to 10 seconds because I don’t cold-boot often, but when I do it’s because I need to do something complicated.
  • I reinstalled openSUSE Tumbleweed on the machine, replacing the one it came with. I left 130GB free on the disk, so my partitioning looked like this (from parted(8) in openSUSE):

        Number  Start   End    Size    File system     Name  Flags
         1      1049kB  525MB  524MB   fat16                 boot, esp
         2      525MB   108GB  108GB   ext4
         3      248GB   250GB  2148MB  linux-swap(v1)        swap

    You may note that I don’t use modern Linux filesystems (btrfs). In this setup, simplicity and interoperability is more important than modern features. I do wonder if multiboot ZFS could be a thing.

  • Several attempts to install FreeBSD with 13-CURRENT and 12.0 memory sticks failed. I didn’t write down what went wrong, though I remember the bootloader not installing, or EFI partitions not being found. It was a mess, which is why I ended up with a freshly-reinstalled openSUSE and a vague hope that the FreeBSD Devsummit before FOSDEM would help.
  • At the devsummit, I borrowed a 12.1 installer (regular memstick image) and that did install. I picked UFS because I’m not really comfortable putting ZFS on a partition, and wrestling with the partitioning section of bsdinstall is anyway not my idea of fun.
  • The partitioner tells me I need an EFI partition, and adds a 200MB partition to the table – even though there is already one there, from the Linux installation. I deleted the “new” one before continuing.
  • FreeBSD 12.1 installs just fine, and I ended up with this partitioning (again, with parted(8) from openSUSE):

        Number  Start   End    Size    File system     Name  Flags
         1      1049kB  525MB  524MB   fat16                 boot, esp
         2      525MB   108GB  108GB   ext4
         5      109GB   141GB  32.0GB                  root
         3      248GB   250GB  2148MB  linux-swap(v1)        swap
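As a sanity check on a dual-boot layout like this, you can verify from the parted numbers that no partition overlaps its neighbour; the FreeBSD root (partition 5) has to fit in the gap Linux left. A small Python sketch, using the values from the listing above (parted reports decimal units, so GB here is 10^9 bytes):

```python
def parse_size(s):
    """Convert a parted-style size like '525MB' or '108GB' to bytes.

    parted uses decimal (SI) units, so 1GB = 10**9 bytes.
    """
    units = {"kB": 10**3, "MB": 10**6, "GB": 10**9}
    for suffix, factor in units.items():
        if s.endswith(suffix):
            return int(float(s[: -len(suffix)]) * factor)
    raise ValueError(f"unrecognized size: {s}")

# (number, start, end), taken from the final parted listing above,
# in on-disk order.
partitions = [
    (1, "1049kB", "525MB"),   # EFI system partition
    (2, "525MB", "108GB"),    # Linux ext4
    (5, "109GB", "141GB"),    # FreeBSD root (UFS)
    (3, "248GB", "250GB"),    # Linux swap
]

def check_no_overlap(parts):
    """True if partitions, in listed order, never overlap on disk."""
    spans = [(parse_size(a), parse_size(b)) for _, a, b in parts]
    return all(prev_end <= start
               for (_, prev_end), (start, _) in zip(spans, spans[1:]))

print(check_no_overlap(partitions))  # → True
```

Nothing deep, but it is the kind of check worth doing before pointing an installer's partitioner at a disk that already carries another operating system.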
13-CURRENT

I just built r359804, FreeBSD-CURRENT, on the laptop itself. I NFS-mounted /usr/src over 1GbE from my NAS, and kicked off make -j4 buildworld && make -j4 buildkernel. It started at 13:21 today, and ran until 16:49. Keep in mind that this rebuilds the entire operating system from the ground up, including Clang/LLVM, and over NFS.

The fans run fast, and I kept the machine balanced on some pens for extra airflow; basically it churns away at things until it’s time to mergemaster -p -FU.

This is dmesg from starting this particular build. It finds most things, but .. (ignore the iwm messages, that’s from my attempt at bunging the wrong firmware onto the wifi card).

Usable as a Server

Without WiFi, the laptop is not very portable and sits attached to an ethernet cable on my desk.

The GPU is a 10th-gen Intel iGPU, which is not yet supported by FreeBSD. There’s shared infrastructure between Linux and FreeBSD – basically, the Linux API has been implemented in the FreeBSD kernel, and the graphics team periodically imports the “upstream” Linux modules. This is a case of not spending engineering effort duplicating things – manufacturers are willing to support Linux, so let’s ride those coattails.

Without graphics, the laptop doesn’t run a graphical desktop environment very well, so it sits on my desk attached to an ethernet cable, in text mode.

Suspend .. yes, this laptop can be suspended. For instance, running zzz as root does the job. Suspend is never an issue for FreeBSD. Resume, on the other hand, is. Related to the graphics problem: after resume the screen never comes back on, even though I can ssh in.

Future Laptop Use?

The hardware issues I have with this laptop are just a matter of time. There’s work on the WiFi drivers going on in the FreeBSD wireless team, and desktop / graphics updates happen regularly. Where I can, I’m helping test or develop for the hardware I have. Kernel stuff isn’t really my cup of tea.

In the meantime, I’m using the laptop to experiment with Qt’s framebuffer support. You can run Qt applications outside of X11 and Wayland by using the framebuffer provided by the OS, if there is a QPA (Qt Platform Abstraction) for your system.

The Linux framebuffer QPA offers a lot of features; the FreeBSD one does graphics, and that’s it. Looking at the code, I see that it can use tslib, but that’s about touchscreens-as-event-sources. I fixed the library to build on FreeBSD, wrote a port for it, built Qt against it, and then realised that tslib in itself doesn’t provide keyboard or mouse input.

I’m working on that keyboard and mouse support. Now that FreeBSD has libinput and a BSD shim to the Linux evdev API (which it didn’t when the FreeBSD QPA was written), I can probably just duplicate most of the LinuxFB QPA in terms of event handling. That, in turn, would mean that full Qt applications with “normal” Qt interaction could be supported on the framebuffer. Here’s a screenshot – er .. an actual photo, since Spectacle doesn’t yet work – of Calamares running on the laptop in the FreeBSD framebuffer.

Consider it a teaser for things to come.

Don't miss Akademy 2020 — This Year KDE is going Online!

Wednesday 15th of April 2020 08:00:00 AM

The KDE Community will be hosting Akademy 2020 online between Friday 4th and Friday 11th September.

The conference is expected to draw hundreds of attendees from the global KDE Community. Participants will showcase, discuss and plan the future of the Community and its technology. Members from the broader Free and Open Source Software community, local organizations and software companies will also attend.

Akademy 2020 Program

Akademy 2020 will begin with virtual training sessions on Friday 4 September. This will be followed by a number of talk sessions held on Saturday 5 Sept. and Sunday 6 Sept. The remaining 5 days will be filled with workshops and Birds of a Feather (BoFs).

A Different Akademy

Due to the unusual circumstances we are living through and the need to keep the KDE Community healthy, thriving, and safe, the Akademy Team have decided to host Akademy 2020 online. During the program organization period for this year's activities, we took into consideration multiple timezones to ensure that, regardless of physical location, every member of the KDE Community can participate in as many conference activities as they like.

Despite not being able to meet in person this year, KDE members will be able to reach an even wider audience and more people will be able to attend and watch the live talks, learn about the workings of the technology and the Community by participating in Q&As and panels.

Registrations and Call for Papers will be opening soon!

Dot Categories:

More in Tux Machines

Security-Oriented Kodachi Linux 7.2 Released with One of the Best Secure Messengers

Based on the latest Xubuntu 18.04 LTS point release, Kodachi Linux 7.2, codename “Defeat”, comes with the newest Ubuntu kernel, patched against recent security vulnerabilities, and a full sync with the upstream Bionic Beaver repositories to provide users with up-to-date installation media. On top of that, the new release introduces new security features, such as Session Messenger, a popular private messenger that the Kodachi Linux team dubs one of the best secure messengers, and the Steghide UI utility for hiding encrypted text messages in image, text, or audio files.

Linux and Linux Foundation: 5.9 Kernel and LF Edge

  • Intel SERIALIZE, Dropping Of SGI UV Supercomputer, i386 Clang'ing Hit Linux 5.9

    A number of x86-related changes were sent out today for the first full day of the Linux 5.9 merge window. 

  • Btrfs Seeing Some Nice Performance Improvements For Linux 5.9

    With more eyes on Btrfs given the file-system is set to become the default for Fedora 33 desktop spins, there are some interesting performance optimizations coming to Btrfs with the in-development Linux 5.9 kernel.  On the performance front for Btrfs in Linux 5.9 there are optimized helpers for little-endian architectures to avoid little/big endian conversions around the on-disk format, tree-log/fsync optimizations yielding around a 12% lower maximum latency for the Dbench benchmark, faster mount times for large file-systems in the terabyte range, and parallel fsync optimizations. 

  • As IoT Continues to Evolve, LF Edge Explores the Edge Continuum in a New White Paper

    Earlier this month, LF Edge, an umbrella organization under The Linux Foundation, published a white paper updating the industry on their continued ecosystem collaboration. LF Edge brings together projects within the Foundation, that “aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system.”

Open Hardware With Arduino: Counter and MKR ZERO

  • Keep track of your laps in the pool with this Arduino counter

PeterQuinn925 swims for exercise, and to train for the occasional triathlon, but when doing so he often zones out and forgets how many laps he has swum. To solve this problem without spending a lot of money on a commercial solution, he created his own counter using an Arduino Nano and an ultrasonic sensor. The sensor detects when a swimmer approaches, and the system calculates distance covered from the lap count, assuming that a lap is roughly 50 yards or meters. This info is announced audibly via a speaker/amplifier using an Arduino speech library and is shown on a 7-segment display.

  • Recreating Rosie the Robot with a MKR ZERO

    While 2020 may seem like a very futuristic year, we still don’t have robotic maids like the Jetsons’ Rosie the Robot. For his latest element14 Presents project, DJ Harrigan decided to create such a bot as a sort of animatronic character, using an ESP8266 board for interface and overall control, and a MKR ZERO to play stored audio effects. The device features a moveable head, arms and eyes, and even has a very clever single-servo gear setup to open and close its mouth.
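The approach-detection logic in the pool-counter project above can be sketched in a few lines; this is an illustrative Python version rather than the author's Arduino code, and the distance threshold, debounce count, and sensor trace are all hypothetical:

```python
def count_laps(distance_readings, near_cm=50, debounce=3):
    """Count pool laps from a stream of ultrasonic distance readings.

    A lap is registered each time the swimmer comes within `near_cm`
    after having moved away; `debounce` consecutive near readings are
    required before counting, to filter out sensor noise. All
    thresholds here are hypothetical.
    """
    laps = 0
    near_count = 0
    armed = True  # ready to register the next approach
    for d in distance_readings:
        if d <= near_cm:
            near_count += 1
            if armed and near_count >= debounce:
                laps += 1
                armed = False  # wait until the swimmer moves away again
        else:
            near_count = 0
            armed = True
    return laps

# Simulated sensor trace (cm): two approaches to the wall.
trace = [300, 280, 40, 35, 30, 250, 300, 45, 40, 38, 200]
print(count_laps(trace))  # → 2
```

The re-arming step is the important part: without it, a swimmer resting at the wall would rack up one lap per reading instead of one per approach.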