Planet KDE - http://planetKDE.org/

KDE Usability & Productivity: Week 81

Sunday 28th of July 2019 06:01:50 AM

Here’s week 81 in KDE’s Usability & Productivity initiative! And boy is there some delicious stuff today. In addition to new features and bugfixes, we’ve got a modernized look and feel for our settings windows to show off:

Doesn’t it look really good!? This design is months in the making, and it took quite a bit of work to pull it off. I’d like to thank Marco Martin, Filip Fila, and the rest of the KDE VDG team for making it happen. This is the first of several user interface changes we’re hoping to land in the Plasma 5.17 timeframe, and I hope you’ll like them!

In the meantime, look at all the other cool stuff that got done this week:

  • New Features
  • Bugfixes & Performance Improvements
  • User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a tax-deductible donation to the KDE e.V. foundation.

KDE Connect mDNS: Nuremberg Megaspring Part 2

Saturday 27th of July 2019 11:19:01 AM

Sprints are a great time to talk in real-time to other project developers. One of the things we talked about at the KDE Connect part of the “Nuremberg Megasprint” was the problem that our current discovery protocol often doesn’t work, since many networks block the UDP broadcast we currently use. Additionally, we often get feature requests for more privacy-conscious modes of KDE Connect operation. Fixing either of these problems would require a new Link Provider (as we call it), and maybe we can fix both at once.

A New Backend

First, let’s talk about discovery. The current service discovery mechanism in KDE Connect is to send a multicast UDP packet to the current device’s /24 subnet. This is not ideal, since some networks are not /24, and since many public networks block packets of this sort. Alternatively, you can manually add an IP address, which then establishes a direct connection. Manual connections work on many networks which block UDP, but they are a bit of a hassle. Can we find a better way to auto-discover services?
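The /24 assumption can be illustrated with Python’s standard-library ipaddress module: broadcasting to the guessed .255 address only covers the whole network when the netmask really is /24. (This is just an illustration of the addressing math, not KDE Connect’s actual code.)

```python
import ipaddress

def assumed_slash24_broadcast(ip: str) -> str:
    """Broadcast address you'd guess by assuming the network is a /24."""
    return str(ipaddress.ip_network(ip + "/24", strict=False).broadcast_address)

def real_broadcast(ip: str, prefix: int) -> str:
    """Actual broadcast address for the network's real prefix length."""
    return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False).broadcast_address)

# On a true /24 network the guess is right:
assert assumed_slash24_broadcast("192.168.1.42") == real_broadcast("192.168.1.42", 24)

# On a /22 campus network the guessed address differs from the real one,
# so a broadcast aimed at it does not cover the whole subnet:
print(assumed_slash24_broadcast("10.20.1.42"))   # 10.20.1.255
print(real_broadcast("10.20.1.42", 22))          # 10.20.3.255
```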

A few months ago, a user named rytilahti posted two patches to our Phabricator for KDE Connect service advertisement over mDNS (aka avahi, aka nsd, aka …). The patches were advertisement-only (they didn’t yet establish a connection), but they were a good proof of concept: mDNS works on many institutional networks which block UDP multicast, since those same networks want mDNS for other things, like network printer discovery.

I would post a screenshot here, but I don’t want to spread details of an internal network too far

At the sprint, we talked about whether we would like to move forward with these and we decided it was useful, so Albert Vaca and I put together two proof of concept patches to start trying to establish a connection using mDNS advertisements:

The patches are not yet fully working. We can both see each other and attempt to establish a connection but then something goes wrong and one of them crashes. Given that this was less than 8 hours of work, I would call this a success!
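Conceptually, advertising over DNS-SD/mDNS means registering a service instance under a well-known service type and attaching the identity fields as TXT records. A rough, network-free sketch of that data (the service type string and TXT keys here are assumptions for illustration, not the final protocol):

```python
def make_dnssd_advertisement(device_name, device_id, port, protocol_version):
    """Describe a DNS-SD service registration as plain data.

    A real implementation would hand these values to an mDNS responder
    (Avahi on Linux, NsdManager on Android, Bonjour on macOS).
    """
    # "_kdeconnect._udp.local." is an assumed service type for this sketch
    instance = f"{device_id}._kdeconnect._udp.local."
    return {
        "instance": instance,
        "port": port,
        "txt": {                    # TXT records carry the identity fields
            "id": device_id,
            "name": device_name,
            "protocol": str(protocol_version),
        },
    }

ad = make_dnssd_advertisement("my-laptop", "abc123", 1716, 7)
print(ad["instance"])   # abc123._kdeconnect._udp.local.
```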

There is still plenty to do, but it was very helpful to be able to sit in-person and talk about what we wanted to accomplish and work out the details of the new protocol.

More Privacy

Before we talk about privacy, it helps to have a quick view of how KDE Connect currently establishes a connection:

  • As described above, both devices send a multicast UDP packet. This is what we call an “Identity Packet”, in which each device sends its name, capabilities (enabled plugins), and unique ID
  • If your device receives an identity packet from a device it recognizes, it establishes a secure TCP connection (if both devices open a connection, the duplicate connection is handled and closed)
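The two steps above can be sketched roughly as follows (the field names are assumptions based on the description, not the actual wire format):

```python
import json

def make_identity_packet(device_id, name, capabilities):
    """Step 1: the 'identity packet' each device multicasts over UDP."""
    return json.dumps({
        "type": "identity",
        "deviceId": device_id,
        "deviceName": name,
        "capabilities": sorted(capabilities),
    })

def should_connect(packet_json, trusted_ids):
    """Step 2: open a TCP connection only to devices we already recognize."""
    packet = json.loads(packet_json)
    return packet["deviceId"] in trusted_ids

pkt = make_identity_packet("abc123", "Simon's Phone", {"sms", "clipboard"})
print(should_connect(pkt, trusted_ids={"abc123"}))  # True
```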

As long as we are talking about a new backend, let’s think about ways to make KDE Connect more privacy-conscious. There are two problems to address:

  • Device names often contain personal information. For instance, “Simon’s Phone” tells you that “Simon” is around
  • Device IDs are unique and unchanging. Even assuming I rename my phone, you can still track a particular device by checking for the same ID to show up again and again

Solving the first problem is easy. We only need the device name so we can display it in the list of available devices to pair with. So, instead of sending that information in every identity packet, add a “discovery mode” switch that withholds the device name until a connection to an already-trusted device is established.

This leaves the second problem, which is quite a bit more tricky. One answer is to have user-selected trusted wifi networks, so KDE Connect doesn’t broadcast on a random wifi that the user connects to. But what if I connect to, say, my university network, where I want to use KDE Connect but don’t want to tell everyone that I’m here?

We don’t have a final answer to this question, but we discussed a few possible solutions. We would like some way of verifying ourselves to the other device which conceals our identity behind some shared secret, so the other device can trust that we are who we say we are, but other devices can’t fingerprint us. It is a tricky problem, but not one we need to solve yet. Step 1 is to get the new mDNS backend working; step 2 is to add advanced features to it!
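One classic shape for such a scheme is an HMAC challenge–response over a pairing secret: the verifier sends a fresh random challenge, only a holder of the shared secret can produce the matching tag, and eavesdroppers see values that change every round and cannot be linked back to a stable device ID. A stdlib sketch of the general idea (this illustrates the technique, not a protocol the team settled on):

```python
import hashlib
import hmac
import os

def prove(shared_secret: bytes, challenge: bytes) -> bytes:
    """Answer a challenge; only a holder of shared_secret can compute this."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Constant-time check that the response matches the expected tag."""
    return hmac.compare_digest(prove(shared_secret, challenge), response)

secret = b"pairing-secret-established-earlier"
challenge = os.urandom(32)            # fresh per discovery round

assert verify(secret, challenge, prove(secret, challenge))
assert not verify(b"wrong-secret", challenge, prove(secret, challenge))
# A different challenge yields a different, unlinkable response:
assert prove(secret, os.urandom(32)) != prove(secret, challenge)
```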

3D – Interactions with Qt, KUESA and Qt Design Studio, Part 2

Friday 26th of July 2019 12:15:43 PM

In my last post, I went through a method of creating a simulated reflection with a simple scene. This time I’d like to take the technique and apply it to something a bit more realistic. I have a model of a car that I’d like to show off in KUESA™, and as we know, KUESA doesn’t render reflections.

Using the method discussed last time, I’d need to create an exact mirror of the object, duplicate it below the original, and have a floor that is partially transparent. This would be expensive to render if I duplicated every single mesh of the high poly model. Here, the model is X triangles – duplicating it would result in Y triangles – is it really worth it for a reflection?

However, if I could take a low poly version of the mesh and make it look like the high poly mesh, this could work: I could bake the materials onto a low poly mesh. This is easy to do in Blender.

Here I have a low poly object as the created reflection, with the baked materials giving the impression of much higher detail than physically exists. Using the same technique of applying an image texture as an alpha channel, I can fade out the mesh below to look more like a reflection.
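The fade can be thought of as an alpha ramp: the farther a fragment of the mirrored mesh sits below the floor plane, the more transparent it becomes. A tiny sketch of such a falloff (the linear ramp and the particular fade distance are illustrative choices, not what KUESA does internally):

```python
def reflection_alpha(depth_below_floor: float, fade_distance: float = 2.0,
                     base_alpha: float = 0.6) -> float:
    """Linear alpha falloff for a faked reflection.

    depth_below_floor: how far below the floor plane the fragment lies.
    base_alpha: reflection strength right at the contact point.
    """
    t = max(0.0, 1.0 - depth_below_floor / fade_distance)
    return base_alpha * t

print(reflection_alpha(0.0))   # strong near the contact point
print(reflection_alpha(2.0))   # fully faded out
```

In practice this ramp would be painted into the image texture used as the alpha channel, so the renderer never evaluates it per frame.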

To further the illusion, I can also blur the details on the baked materials – this, along with the rough texture of the plane, gives more of a reflective look, especially when you consider that a rough plane would not give an exact reflection:

The good news is, it produces an effect similar to a reflection, which is what we want.

This is all very well for a static model, but how can I deal with an object that has animations, such as a car door opening? I’ll look into this in the next post.

The post 3D – Interactions with Qt, KUESA and Qt Design Studio, Part 2 appeared first on KDAB.

KDE First Contributions and first sprint

Friday 26th of July 2019 12:03:00 PM

I have been a KDE user for more than 10 years. I really love the KDE community, Plasma and its apps. I have been eagerly reading Nate Graham's blog, and he gave me the inspiration to start contributing.

It has been an opportunity to learn some C++, Qt and some QML.

So far I am very happy to have contributed a few features and bug fixes:

And a few less noticeable as well.

And I have quite a lot more in the back of my head. I am close to completing click-to-play for video and audio previews in the Dolphin information panel.

Usability & Productivity Sprint in Valencia

Last month I participated in the Usability & Productivity Sprint in Valencia. I was very happy to meet some great KDE community members from three continents and nine countries.

There I improved KInfoCenter consistency with the help of Filip and Marco, and added a link from the system battery plasmoid to the corresponding KInfoCenter KCM. I started work on a new recently-used-documents ioslave that will match the same feature in Kicker/Kickoff, adding some consistency and activity awareness to Dolphin and the KDE Open/Save dialogs. I learned about KWin debugging with David Edmundson. And I had great discussions with the people there.

I am going to Akademy 2019!

And since I am a big fan of rust-lang, this will be a nice opportunity to debate on the matter and on the future of KDE.

Nürnberg Sprint and KDE Itinerary Browser Integration

Friday 26th of July 2019 07:00:00 AM

Getting everyone interested or involved in a specific area into a room for a few days with no distractions is a very effective way to get things done; that’s why we have been doing sprints in KDE for many years. Another possible outcome, however, is that we end up with some completely unexpected results as well. Here is one such example.

Sprint

Last weekend we had the combined KDE Connect, Streamlined Onboarding and KWin sprints in Nürnberg. SUSE kindly provided us with rooms in their office (as well as coffee and the occasional pizza), and KDE e.V. had made sure we didn’t have to sleep on a park bench somewhere.

I really like the recent trend of “sprint pooling”, that is, combining a few more or less independent topics at the same time and location. This not only reduces travel overhead, it also helps avoid silo teams and fosters cross-team collaboration. While it’s not a full replacement for doing this at a larger scale at the Randa Meetings, it’s a massive improvement over isolated sprints.

And it made me attend three sprints I’d probably not have attended individually, as I’m not deeply involved enough in any one topic. That, however, turned out to be very much worth it.

KDE Itinerary Browser Integration

While there were many important discussions, achievements and decisions as other reports on Planet KDE show, I’d like to talk about one in particular here, the experiments on extracting data relevant for KDE Itinerary via a browser extension. You might notice that this isn’t in scope for any of the official sprint topics, but that’s exactly what happens when you bring together people from different areas, in this case Kai from Plasma Browser Integration and myself from KDE Itinerary.

KDE Itinerary so far gathers most of its data in the email client; Kai’s idea was to also consider the browser for this. That makes sense, as it’s where you do most of the actual booking operations. However, unlike email, websites can be very dynamic, making them hard to capture and analyze in order to get an idea of how viable that approach actually is.

Having someone who knows how to develop browser extensions and someone who knows the structured data patterns we need to look for sitting next to each other for a day enables quite some progress. As a start, we (and by we I mean Kai) wrote a small browser extension that would look for the schema.org and IATA BCBP patterns we can generically extract. If those were found in sufficient numbers, we would actually have a viable way to get relevant data, without needing a completely unscalable web scraping approach.
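The core of the schema.org side of such a detection pass is simple: scan the page for application/ld+json script blocks and check whether they declare a type of interest. A minimal sketch of that check (the set of types listed here is illustrative, not the extension’s actual list):

```python
import json
import re

# Find the payload of <script type="application/ld+json"> blocks
LDJSON_RE = re.compile(
    r'<script[^>]+type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE)

INTERESTING_TYPES = {"FlightReservation", "LodgingReservation",
                     "FoodEstablishmentReservation", "EventReservation"}

def find_reservations(html: str):
    """Return the schema.org @type values of reservation annotations in a page."""
    found = []
    for block in LDJSON_RE.findall(html):
        try:
            data = json.loads(block)
        except ValueError:
            continue  # malformed JSON-LD; skip it
        for item in data if isinstance(data, list) else [data]:
            if isinstance(item, dict) and item.get("@type") in INTERESTING_TYPES:
                found.append(item["@type"])
    return found

page = '''<html><body>
<script type="application/ld+json">
{"@context": "http://schema.org", "@type": "LodgingReservation",
 "reservationNumber": "ABC123"}
</script></body></html>'''
print(find_reservations(page))  # ['LodgingReservation']
```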

My initial expectation was that we’d need to run this extension on a couple of machines for a few weeks until we had some initial numbers. It turned out I was very, very wrong. The extension almost immediately started to report results: it looks like the majority of hotel chain websites and hotel booking sites have schema.org annotations, as do many restaurant and event booking sites, next to a relevant number of individual hotel and restaurant sites. So, this approach is definitely worth pursuing.

Of course the actual development work only just starts now, and there is still a lot of work ahead of us to get this to a point where it provides value, but we have come up with an approach and validated it in a tiny fraction of the time it would have taken any one of us individually.

Contribute

If you find the idea of a Free Software, designed-for-privacy digital travel assistant appealing, but so far have shied away from helping out because you are more familiar with web technologies than C++, the browser integration is a great way to get in :)

Another way to help is by enabling KDE e.V. (and its sister organizations like the Krita Foundation) to support such activities: financially, through donations, by becoming a supporting member or a corporate sponsor; or logistically, for example by providing a suitable venue for such events.

Welcome to KDE: Nuremberg Megasprint Part 1

Thursday 25th of July 2019 09:18:23 AM

Now that it has been over half a year since I started this blog, it is time to address one of the topics that I promised to address at the beginning: How I got started with KDE. I will do this in the context of the “Nuremberg Megasprint” which combined a KDE Connect sprint, a KDE Welcome / Onboarding sprint, and a KWin sprint.

At the Onboarding sprint, we were talking mostly about ways to make it easier for developers new to KDE to work on our software. Currently the path to getting that working is quite convoluted and pretty much requires that a developer read the documentation (which often doesn’t happen). We agreed that we would like the new developer experience to be easier. I don’t have a lot to say about that, but keep an eye on the Planet for an idea of what was actually worked on! Instead, since I am a relatively new KDE contributor, I will tell the story of how I got started.

I started using Plasma as a desktop environment around 2012, shortly after Ubuntu switched from Gnome 2, which I liked, to Unity, which I disliked. I tried playing with Mate and Cinnamon for Ubuntu, but I didn’t find either one was what I wanted. I had heard that KDE existed, but I didn’t know anything about it, so I gave it a try as well.

At the time, my main computer was an HP TM2-2000-series laptop, with a quite respectable 4GB RAM, some decent dual-core, first-generation Intel i5, and a little AMD GPU (which I could never get to work under Linux). But most importantly, it had a touchscreen with a capacitive digitizer for fingers, some styluses, or carrots (which usually work better than the special styluses) and a built-in Wacom digitizer for taking notes in class using the special pen.

An HP TM-2 Laptop, Almost in Tablet Mode

Plasma was nice to use on the touchscreen but most importantly, it had a built-in hotkey to disable the capacitive digitizer so I could write notes using the Wacom pen without my palm constantly messing everything up. It may sound silly, but that is literally the reason I started using KDE software!

Kubuntu came packaged with KDE Connect, which I was very excited by. Could I write SMS from the desktop without touching my phone and without installing questionable software? At the time, no. This was practically the first release of KDE Connect. It still had cool features, so I still loved it, but replying to SMS didn’t come until later.

Fast-forward the clock a couple of years. KDE Connect had supported replying to SMS for a while, but something was wrong: if you wrote a “long” SMS, KDE Connect would appear to accept it, but the message would silently never be sent. How curious! Since the only thing you could do was reply, it was hard to reproduce what was happening. I also noticed that KDE Connect had a work-in-progress, unreleased Telepathy plugin.

I started trying to set up Telepathy so that I would be able to send messages rather than just reply to them. I was able to get the plugin set up, which had (and still has, unfortunately) only the very basic feature that you can enter a phone number and see messages sent and received in that “chat”, with no history or contact matching. Once I had the ability to send messages from KDE Connect, I noticed that any message longer than 1 SMS (~140 bytes) would go missing.

At this point, the only software I had built was the Telepathy plugin (none of the core parts of KDE Connect). Luckily, the Android app is not difficult to build and debug. I followed the message I was trying to send through the app into an Android system call which was clearly for sending a single SMS (and apparently fails silently if the message is too long). I tweaked that part of the code to use the Android way of sending a multi-part SMS, posted the patch (to the mailing list, because I didn’t know Phabricator was the way to go, since I hadn’t read the contributor documentation), and I have been hooked ever since.
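The underlying issue is that a single-part SMS holds at most 140 payload bytes (160 GSM-7 characters), so longer texts must be split into parts, which is what the multi-part API does on your behalf. A simplified sketch of that split (real SMS segmentation also handles the GSM-7/UCS-2 distinction and per-part concatenation headers, which is where the 153 comes from):

```python
def split_sms(message: str, part_size: int = 153) -> list[str]:
    """Split a long message into SMS-sized parts.

    153 = 160 GSM-7 characters minus the concatenation header overhead.
    """
    if len(message) <= 160:          # fits in a single, un-concatenated SMS
        return [message]
    return [message[i:i + part_size] for i in range(0, len(message), part_size)]

parts = split_sms("x" * 200)
print(len(parts))          # 2
print(len(parts[0]))       # 153
```

Sending each part through the single-SMS API instead of the multi-part one is exactly the kind of thing that fails silently on Android.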

Building the desktop app was more of a problem and is a better story to tell in the context of onboarding. I couldn’t figure out what cmake flags I needed. I am using Fedora, so I downloaded the source RPM to see if that would help me. I also couldn’t figure out how to read that, but I *did* figure out how to re-build the RPM with new sources. So, for about the first 8 months of my time in KDE, my workflow was:

  • Make a change
  • Rebuild the RPM (which took a relatively long time, even on my fairly fast computer)
  • Install the new RPM
  • Try to figure out why my change wasn’t working

Needless to say, this path was very cumbersome. Luckily, about this time, someone updated the KDE Connect wiki with the proper cmake flags to use!

After a certain amount of effort, I can now run KDE Connect in Eclipse, with the integrated debugger view (Note to readers: I recommend a different IDE for KDE/Qt development. Eclipse requires lots of manual configuration. Try KDevelop!)

kdeconnectd, in Eclipse, paused in the debugger

That’s all for this post. I think it’s clear to say that my road to KDE development was far from straightforward. Hopefully we can make that road smoother in the future!

Back from vacation 2019

Wednesday 24th of July 2019 10:00:00 PM

After 795km on my bicycle, I’m back in the saddle – or, rather, back on my reasonably-comfortable home-office chair. Time to pick up the bits and pieces and dust off the email client!

Some notes and bits from the trip:

  • It takes about a week to settle into a camping stride, for packing in the morning, getting things onto the bike, and getting going. Daily startup time dropped from 3 hours to 1.5 hours by the end of the trip. What this means: practice makes perfect.
  • Fromage frais, in particular Bibeleskaes, is the foundation on which well-fed bike trips are built.
  • The Moselle / Mosel river is lovely, even more so when biking along it. It would have been even nicer 15 degrees colder, with France and Germany sagging under an unprecedented heatwave.
  • Every. Single. French. Person. says bon jour! when passing by on a bike, people move aside when you ring a bell or even whistle to overtake. It felt amazingly friendly.
  • Not every boulangerie makes a decent croissant.
  • The best local beer I had was L’ogresse rousse.
  • After two weeks of nothing but bike paths and rural France, the city of Trier is over-crowded, noisy, gross, and feels downright dangerous. Also has no concept of decent bike path markings to get you to the train station.
  • There don’t seem to be any BSD developers along the Moselle.
  • A tent does not provide protection from nuclear fallout. I learned this from the campground safety poster near the Thionville reactors.

No photos, since my low-end phone takes really bad shots.

Anyway, I’m glad to be back home, with bug bites and a slight sunburn.

KDE Onboarding Sprint Report

Wednesday 24th of July 2019 07:08:36 PM

These are my notes from the onboarding sprint. I had to miss the evenings, because I’m not very fit at the moment, so this is just my impression from the days I was able to be around, and from what I’ve been trying to do myself.

Day 1

The KDE Onboarding Sprint happened in Nuremberg, 22 and 23 July. The goal of the sprint was to come closer to making it easier to get started working on existing projects in the KDE community; more specifically, this sprint was held to work on the technical side of the developer story. Of course, onboarding in the wider sense also means having excellent documentation and a place for newcomers to ask questions, both of them easy to find.

Ideally, an interested newcomer would be able to start work without having to build (many) dependencies and without needing the terminal at first, would be able to start improving libraries like KDE Frameworks as a next step, and would be able to create a working and installable release of their work, to use or to share.

Other platforms have this story down pat, with the proviso that these platforms promote greenfield development rather than extending existing projects, and assume you work within the existing platform framework, without additional dependencies:

  • Apple: download XCode, and you can get started.
  • Windows: download Visual Studio, and you are all set.
  • Qt: download the Qt installer with Qt Creator, and documentation, examples and project templates are all there, no matter which platform you develop for: macOS, Windows, Linux, Android or iOS.

GNOME Builder is also a one-download, get-started offering. But Builder adds additional features to the three above: it can download and build extra dependencies (with atrocious user-feedback, it has to be said), and it offers a list of existing GNOME projects to start hacking on. (Note: I do not know what happens when getting Builder on a system that lacks git, cmake or meson.)

KDE has nothing like this at the moment. Impressive as the kdesrc-build scripts are technically (thanks go to Michael Pyne for giving an in-depth presentation), they are not newcomer-friendly, with a complicated syntax, configuration files and dependency on working from the terminal. KDE also has much more diversity in its projects than GNOME:

  • Unlike GNOME, KDE software is cross-platform — though note that not every person present at the sprint was convinced of that, even dismissing KDE applications ported to Windows as “not used in the real world”.
  • Part of KDE is the Frameworks set of additional Qt libraries that are used in many KDE projects
  • Some KDE projects, like Krita, build from a single git repository; some build from dozens of repositories, where adding one feature means working in half a dozen of them at the same time; and some, like Plasma, replace the entire desktop on Linux.
  • Some KDE projects are also deployed to mobile systems (iOS, Android, Plasma Mobile)

Ideally, no matter the project the newcomer selects, the getting-started story should be the same!

When the sprint team started evaluating technologies currently used in the KDE community to build KDE software, things got quite confusing. Some of the technologies discussed were oriented towards power users, some towards making binary releases. It is necessary to first determine which components need to be delivered to make a seamless newcomer experience possible:

  • Prerequisite tools: cmake, git, compiler
  • A way to fetch the repository or repositories the newcomer wants to work on
  • A way to fetch all the dependencies the project needs, where some of those dependencies might need to transition from dependency to project-being-worked on
  • A way to build, run and debug the project
  • A way to generate a release from the project
  • A way to submit the changes made to the original project
  • An IDE that integrates all of this

The sprint has spent most of its time on the dependencies problem, which is particularly difficult on Linux. An inventory of ways KDE projects “solve” the problem of providing the dependencies for a given project currently includes:

  • Using distribution-provided dependencies: this is unmaintainable because there are too many distributions with too much variation in the names of their packages to make it possible to keep full and up-to-date lists per project — and newcomers cannot find the deps from the name given in the cmake find modules.
  • Building the dependencies as CMake external projects per project: is only feasible for projects with a manageable number of dependencies and enough manpower to maintain it.
  • Building the dependencies as CMake external projects on binary factory, and using a docker image identical to the one used on the binary factory + these builds to develop the project in: same problem.
  • Building the (KDE, though now also some non-KDE) dependencies using kdesrc-build, getting the rest of the dependencies as distribution packages: this combines the first problem with fragility and a big learning curve.
  • Using the KDE flatpak SDK to provide the KDE dependencies, and building non-KDE dependencies manually, or fetching them from other flatpak SDK’s. (Query: is this the right terminology?) This suffers from inter- and intra-community politicking problems.

ETA: I completely forgot to mention Craft here. Craft is a Python-based system, similar to Gentoo's emerge, that has been around for ages. We used it initially for our Krita port to Windows; back then it was exclusively Windows-oriented. These days, it also works on Linux and macOS. It can build all the KDE and non-KDE dependencies that KDE applications need. But then why did I forget to mention it in my write-up? Was it because there was nobody at the sprint from the Craft team? Or because nobody had tried it on Linux, and there was a huge Linux bias in any case? I don’t know… It was discussed during the meeting, though.

As an aside, much time was spent discussing docker, but when it was discussed, it was discussed as part of the dependency problem. However, it is properly a solution for running a build without affecting the rest of the developer's system. (Apparently, there are people who install their builds into their system folders.) Confusingly, part of this discussion was also about setting environment variables to make it possible to run builds that are installed outside the system, or not installed at all. Note: the XDG environment variables that featured in this discussion are irrelevant for Windows and macOS.

Currently, newcomers for PIM, Frameworks or Plasma development are pointed towards kdesrc-build (https://community.kde.org/Get_Involved/development, four clicks from www.kde.org), which, not unexpectedly, leads to a heavy support burden in the KDE development communication channels. For other applications, like Krita, there are build guides (https://docs.krita.org/en/contributors_manual.html, two clicks from krita.org), and that also is not a feasible solution.

As a future solution, Ovidiu Bogdan presented Conan, a cross-platform binary package manager for C++ libraries. This could solve the dependency problem, and only the dependency problem, but at the expense of making the run problem much harder, because each library is in its own location. See https://conan.io/.

Day 2

The attendees decided to try to tackle the dependency problem. A certain amount of agreement was reached that this is a big problem, so it was discussed in depth. Note again that the focus was on Linux, relegating the cross-platform story to second place. Dmitry noted that when he tries to recruit students for Krita, only one in ten is familiar with Linux, pointing out that we’re really limiting ourselves with this attitude.

A KDE application, kruler, was selected as a prototype for building with dependencies provided either by flatpak or Conan.

Dmitry and Ovidiu dug into Conan. From what I observed, laying down the groundwork is a lot of work, and by the end of the evening Dmitry and Ovidiu had packaged about half of the Qt and KDE dependencies for kruler. Though the Qt developers are considering moving to Conan for Qt’s 3rd-party deps, Qt in particular turned out to be a problem: Qt needs to be modularized in Conan, instead of being a big, fat monolith. See https://bintray.com/kde.

Aleix Pol had already begun integrating flatpak and docker support into KDevelop, as well as providing a flatpak runtime for KDE applications (https://community.kde.org/Flatpak).

This made it relatively easy to package kruler, okular and krita using flatpak. There are now maintained nightly stable and unstable flatpak builds for Krita.

The problems with flatpak, apart from the politicking, consist of two different opinions on what an appstream file should contain, checks that go beyond what the freedesktop.org standards demand, weird errors in general (you cannot have a default git branch tag that contains a slash…), an opaque build system, and an appetite for memory that goes beyond healthy: my laptop overheated and hung when trying to build a Krita flatpak locally.

Note also that the flatpak (and docker) integration in KDevelop is not done yet, and didn’t work properly when testing. I am also worried that KDevelop is too complex and intimidating to use as the IDE that binds everything together for new developers. I’d almost suggest we repurpose/fork Builder for KDE…

Conclusion

We’re not done with the onboarding sprint goal, not by a country mile. It’s as hard to get started hacking on a KDE project, or starting a new KDE project, as it has ever been. Flatpak might be closer to ready than Conan for solving the dependency problem, and though flatpak solves more problems than just the dependency problem, it is Linux-only. Using Conan to solve the dependency problem will be very high-maintenance.

I do have a feeling we’ve been looking at this problem at a much too low level, but I don’t feel confident about what we should be doing instead. My questions are:

  • Were we right to focus first on the dependency problem and nothing but the dependency problem?
  • Apart from flatpak and Conan, what solutions exist to deliver prepared build environments to new developers?
  • Is KDevelop the right IDE to give new developers?
  • How can we make sure our documentation is up to date and findable?
  • What communication channels do we want to make visible?
  • How much effort can we afford to put into this?

DBus connection on macOS

Wednesday 24th of July 2019 04:18:14 PM
What is DBus

DBus is a software bus: an inter-process communication (IPC) and remote procedure call (RPC) mechanism that allows communication between multiple computer programs (that is, processes) concurrently running on the same machine. DBus was developed as part of the freedesktop.org project, initiated by Havoc Pennington of Red Hat, to standardize services provided by Linux desktop environments such as GNOME and KDE.

In this post, we only talk about how the DBus daemon runs and how KDE Applications/Frameworks connect to it. For more details on DBus itself, please see the DBus Wiki.

QDBus

There are two types of bus: the session bus and the system bus. User-end applications should use the session bus for IPC or RPC.

For the DBus connection, Qt already provides a good library, QDBus. The Qt framework, and QDBus in particular, is widely used in KDE Applications and Frameworks on Linux.

The most commonly used function is QDBusConnection::sessionBus(), which establishes a connection to the default session DBus. All DBus connections are established through this function.

Its implementation is:
QDBusConnection QDBusConnection::sessionBus()
{
    if (_q_manager.isDestroyed())
        return QDBusConnection(nullptr);
    return QDBusConnection(_q_manager()->busConnection(SessionBus));
}

where _q_manager is an instance of QDBusConnectionManager.

QDBusConnectionManager is a private class, so we don’t know what exactly happens in the implementation.

The code can be found in qtbase.

DBus connection on macOS

On macOS, we don’t have a pre-installed DBus. When we compile it from source code, or install it from Homebrew or elsewhere, a configuration file session.conf and a launchd configuration file org.freedesktop.dbus-session.plist are delivered and expected to be installed into the system.

session.conf

In session.conf, one important entry is <listen>launchd:env=DBUS_LAUNCHD_SESSION_BUS_SOCKET</listen>, which means the socket path should be provided by launchd through the DBUS_LAUNCHD_SESSION_BUS_SOCKET environment variable.
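As an illustration, a minimal session.conf built around that entry might look like the sketch below (trimmed; the real file shipped with DBus contains additional policy and limit settings):

```xml
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-Bus Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <type>session</type>
  <!-- launchd provides the socket path via this environment variable -->
  <listen>launchd:env=DBUS_LAUNCHD_SESSION_BUS_SOCKET</listen>
</busconfig>
```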

org.freedesktop.dbus-session.plist

On macOS, launchd is a unified operating system service-management framework that starts, stops and manages daemons, applications, processes, and scripts, just like systemd on Linux.

The file org.freedesktop.dbus-session.plist describes how launchd can find the daemon executable, the arguments to launch it with, and the socket to communicate over after launching the daemon.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.freedesktop.dbus-session</string>

    <key>ProgramArguments</key>
    <array>
        <string>/{DBus install path}/bin/dbus-daemon</string>
        <string>--nofork</string>
        <string>--session</string>
    </array>

    <key>Sockets</key>
    <dict>
        <key>unix_domain_listener</key>
        <dict>
            <key>SecureSocketWithKey</key>
            <string>DBUS_LAUNCHD_SESSION_BUS_SOCKET</string>
        </dict>
    </dict>
</dict>
</plist>

Once the daemon is successfully launched by launchd, the socket will be provided in the DBUS_LAUNCHD_SESSION_BUS_SOCKET environment variable of launchd.

We can get it with the following command:

launchctl getenv DBUS_LAUNCHD_SESSION_BUS_SOCKET

Current solution in KDE Connect

KDE Connect urgently needs DBus to make communication between kdeconnectd and kdeconnect-indicator or other components possible.

First try

Currently, we deliver dbus-daemon in the package and run:

./Contents/MacOS/dbus-daemon --config-file=./Contents/Resources/dbus-1/session.conf --print-address --nofork --address=unix:tmpdir=/tmp

--address=unix:tmpdir=/tmp provides a base directory to store a random unix socket descriptor, so we can have several instances at the same time, each with a different address.

--print-address makes dbus-daemon write its generated, real address to standard output.

Then we redirect the output of dbus-daemon to KdeConnectConfig::instance()->privateDBusAddressPath(). Normally, this is $HOME/Library/Preferences/kdeconnect/private_dbus_address. For example, the address stored in it is unix:path=/tmp/dbus-K0TrkEKiEB,guid=27b519a52f4f9abdcb8848165d3733a6.
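Only the unix socket path has to be filtered out of that full address. As a hedged sketch of the extraction (KDE Connect does the equivalent in C++ with QRegularExpression; the helper name here is illustrative):

```python
import re

def extract_socket_path(dbus_address):
    """Pull the unix socket path out of a DBus address string."""
    match = re.search(r"path=(/tmp/dbus-[A-Za-z0-9]+)", dbus_address)
    return match.group(1) if match else None

address = "unix:path=/tmp/dbus-K0TrkEKiEB,guid=27b519a52f4f9abdcb8848165d3733a6"
print(extract_socket_path(address))  # /tmp/dbus-K0TrkEKiEB
```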

Therefore, our program can access this file to get the real DBus address, and use another function in QDBus to connect to it:
QDBusConnection::connectToBus(KdeConnectConfig::instance()->privateDBusAddress(), QStringLiteral(KDECONNECT_PRIVATE_DBUS_NAME));

We redirect all QDBusConnection::sessionBus calls to QDBusConnection::connectToBus to connect to our own DBus.

Fake a session DBus

With this solution, kdeconnectd and kdeconnect-indicator work well together. But in KDE Frameworks there are lots of components which use QDBusConnection::sessionBus rather than QDBusConnection::connectToBus, and we cannot change all of them.

Then I came up with an idea: try to fake a session bus on macOS.

To hack and validate, I launched a dbus-daemon using /tmp/dbus-K0TrkEKiEB as the address, and then typed this in my terminal:
launchctl setenv DBUS_LAUNCHD_SESSION_BUS_SOCKET /tmp/dbus-K0TrkEKiEB

Then I launched dbus-monitor --session. It did connect to the bus that I launched.

And then, any QDBusConnection::sessionBus call can establish a stable connection to the faked session bus, so components in KDE Frameworks can use the same session bus as well.

To implement this in KDE Connect, after starting dbus-daemon, I read the file content, filter out the socket address, and call launchctl to set the DBUS_LAUNCHD_SESSION_BUS_SOCKET environment variable.

// Set launchctl env
QString privateDBusAddress = KdeConnectConfig::instance()->privateDBusAddress();
QRegularExpressionMatch path;
if (privateDBusAddress.contains(QRegularExpression(
        QStringLiteral("path=(?<path>/tmp/dbus-[A-Za-z0-9]+)")
    ), &path)) {
    qCDebug(KDECONNECT_CORE) << "DBus address: " << path.captured(QStringLiteral("path"));
    QProcess setLaunchdDBusEnv;
    setLaunchdDBusEnv.setProgram(QStringLiteral("launchctl"));
    setLaunchdDBusEnv.setArguments({
        QStringLiteral("setenv"),
        QStringLiteral(KDECONNECT_SESSION_DBUS_LAUNCHD_ENV),
        path.captured(QStringLiteral("path"))
    });
    setLaunchdDBusEnv.start();
    setLaunchdDBusEnv.waitForFinished();
} else {
    qCDebug(KDECONNECT_CORE) << "Cannot get dbus address";
}

Then everything works!

Possible improvement
  1. Since we can directly use the session bus, the redirect from QDBusConnection::sessionBus to QDBusConnection::connectToBus is not necessary anymore. Everyone can conveniently connect to it.
  2. Each time we launch kdeconnectd, a new dbus-daemon is launched and the environment in launchctl is overwritten. To improve this, we might detect whether there is already an available dbus-daemon by testing the connectivity of the QDBusConnection returned by QDBusConnection::sessionBus. This could be done by a bootstrap script.
  3. It will be really nice if we can have a unified way for all KDE Applications on macOS.
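For point 2, the check could be as simple as probing the unix socket recorded from the previous run. A hedged sketch of the idea in Python (the real implementation would live in a bootstrap script and would also need to speak the DBus handshake):

```python
import os
import socket

def dbus_daemon_available(socket_path):
    """Return True if something is listening on the given unix socket."""
    if not os.path.exists(socket_path):
        return False
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(socket_path)
        return True
    except OSError:
        # Stale socket file: nothing is listening anymore.
        return False
    finally:
        s.close()

# A missing or stale socket means we must launch a fresh dbus-daemon.
print(dbus_daemon_available("/tmp/dbus-nonexistent"))  # False
```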
Conclusion

I’m looking forward to a general DBus solution for all KDE applications :)

Improved rendering of mathematical expressions in Cantor

Tuesday 23rd of July 2019 09:59:08 PM
Hello everyone!

In the previous post I mentioned that the rendering of mathematical expressions in Cantor had bad performance, which heavily and negatively influenced the user experience. With my recent code changes I addressed this problem and it should be solved now. In this blog post I want to provide some details on what was done.

First, I want to show some numbers proving the performance improvements. For example, loading the notebook "Rigid-body transformations in a plane (2D)" - one of the notebooks I’m using for testing - took 15.9 seconds (this number and all other numbers mentioned in the following are average values of 5 consecutive measurements). With the new implementation it takes only 4.06 seconds. And this acceleration comes without losing render quality.

Here is an example of how the new renderer looks compared to the Jupyter renderer (as you can see, Cantor doesn't show images from the web in Markdown entries, but I will fix that soon).




I did further measurements by executing all the tests I wrote for the Jupyter import, which cover several Jupyter notebooks. Here are the results:
  • Without math rendering - 7.75 seconds.
  • New implementation - 14.014 seconds.
  • Old implementation - 41.296 seconds.
To quickly summarize, we get an average performance improvement of 535%. This result depends on the number of available cores, and I’ll explain below why.

To get these results I solved two main problems of math rendering in Cantor.
First, I changed the code for the LaTeX renderer. In the old implementation the rendering process consisted of the following steps:
  1. create TeX document using a page template and the code provided by the user.
  2. run latex executable on the produced TEX file to generate the DVI file.
  3. run dvips executable to convert the DVI file to an EPS file.
  4. convert the produced EPS file to a QImage using Libspectre library.
After these four steps the produced QImage is shown in Cantor’s worksheet (QGraphicsScene/View). As you see, the overall chain of steps to get the image for a mathematical expression is quite long - there are several steps where time is spent. In total, for a usual mathematical expression these operations take ~500 ms, where the first three steps take 300 ms and the last step takes 200 ms. The complexity and the size of the mathematical expression have a negligible impact on these numbers. Also, the time spent in Cantor on entries of other types is relatively small. So, for example, if you have a notebook with 20 different mathematical expressions and some entries of other types, Cantor will load the project in ca. 20 × 500 ms = 10 s.

I reduced this chain to three elements by merging steps two and three. This was achieved by using pdflatex as the LaTeX engine, which produces a PDF file directly out of the TEX file. Furthermore, I replaced the libspectre library with the Poppler PDF rendering library. This brought the overall time down to 330 ms, with the pdflatex process taking 300 ms and the rendering in Poppler (converting PDF to QImage) taking only 30 ms. With this, for our example notebook with 20 mathematical expressions mentioned above, the rendering takes only 6.6 seconds. In this new chain, the LaTeX process is the bottleneck and I’m looking for potential acceleration here, but until now I didn’t find any "magic" parameters which would help to reduce the time spent in latex rendering.

Despite this progress, loading somewhat bigger documents will still hurt in Cantor. For example, for a project having 100 formulas, opening the file will take ca. 33 seconds.

The problem here is that the rendering process is a blocking and non-parallelized operation - only one mathematical expression is processed at a time, and the UI is blocked for the whole processing time. Obviously, this behaviour is unsatisfactory and under-utilizes modern multi-core CPUs. So I decided to run the rendering in many parallel tasks asynchronously, without blocking the main GUI thread. Fortunately, Qt helps here a lot with its classes QThreadPool, managing the pool of threads, and QRunnable, providing an interface to define "tasks" that will be executed in parallel.
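For illustration only, the same pattern sketched with Python's stdlib thread pool instead of QThreadPool/QRunnable (the render function is a stand-in for the actual LaTeX-to-QImage step):

```python
from concurrent.futures import ThreadPoolExecutor

def render_expression(expr):
    # Stand-in for the real pdflatex + Poppler rendering step.
    return f"image({expr})"

expressions = ["x^2", "\\sin x", "e^{i\\pi}"]

# Submit one task per expression; the main thread stays free and
# picks up each result as soon as its task finishes.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(render_expression, e) for e in expressions]
    results = [f.result() for f in futures]

print(results)
```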

Now, when opening a project, Cantor creates a render task for every mathematical expression in every Markdown entry, sends the task to the thread pool and continues with the processing of the entries in the document. Once such a task has finished, the result is shown in Cantor's worksheet. With this, a further good performance improvement is achieved. Clearly, the more cores you have, the faster the processing will be. Of course, if only a small number of physical threads is possible on your computer, you won't notice a huge difference. But still, you should see an improvement compared to the old single-threaded implementation in Cantor.

For a notebook comparable in size to the "Rigid-body transformations in a plane (2D)" project, which has 109 mathematical expressions, loading the notebook takes a reasonable and acceptable time on my hardware (I have 8 physical cores in the CPU, which is why the render acceleration is so big). And, thanks to the asynchronous processing, the user can interact with the notebook even while the rendering of the mathematical expressions is still in progress.

Since my previous post, not only the math renderer has changed; there is also a huge change in Markdown support - Cantor finally handles embedded math expressions, like $...$ and $$...$$, in accordance with the Markdown syntax. In the next blog post I'll describe how it works now.

Day 58

Tuesday 23rd of July 2019 03:05:40 PM

Since the last update, I worked on the Khipu interface and created some models to manage the information on the screen. Currently, Khipu looks like this:

The interface is not finished and I’ll leave the design to the end, because there’s a lot to do in the back end now.
The remove button works, but the rename hasn’t yet been implemented.
I made a SpaceModel that manages the spaces, and a PlotModel that manages the information in the black rectangle on the menu.
I started to link my code with Analitza, the KDE mathematical library, using its QML components such as Graph2D and Graph3D to show these interactive spaces; the Expression class holds the user input (a mathematical function such as “y = x**2”). Now I’m stuck trying to learn about the AnalitzaPlot library: I need to understand how it works to show functions in these spaces. I’m using KAlgebra Mobile, another KDE mathematical application, as a reference for my code.

Kate LSP Status – July 22

Monday 22nd of July 2019 08:16:00 PM

After my series of LSP client posts, I got the question: What does this actually do? And why should I like this or help with it?

For the basic question: What the heck is the Language Server Protocol (LSP), I think my first post can help. Or, for more details, just head over to the official what/why/… page.

But easier than to describe why it is nice, I can just show the stuff in action. Below is a video that shows the features that at the moment work with our master branch. It is shown using the build directory of Kate itself.

To get a usable build directory, I build my stuff locally with kdesrc-build; the only extra config I have in the global section of my .kdesrc-buildrc is:

cmake-options -DCMAKE_BUILD_TYPE=RelWithDebInfo -G "Kate - Unix Makefiles" -DCMAKE_EXPORT_COMPILE_COMMANDS=ON

This will auto generate the needed .kateproject files for the Kate project plugin and the compile_commands.json for clangd (the LSP server for C/C++ the plugin uses).
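For reference, compile_commands.json is just a list of per-file compiler invocations that clangd reads to understand your code; a single (hypothetical, paths invented) entry looks roughly like this:

```json
[
  {
    "directory": "/home/user/kde/build/kate",
    "command": "g++ -std=c++14 -I/home/user/kde/src/kate -c /home/user/kde/src/kate/main.cpp",
    "file": "/home/user/kde/src/kate/main.cpp"
  }
]
```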

If you manually build your stuff with cmake, you can just add the

-G "Kate - Unix Makefiles" -DCMAKE_EXPORT_COMPILE_COMMANDS=ON

parts to your cmake call. If you use ninja and not make, just use

-G "Kate - Ninja" -DCMAKE_EXPORT_COMPILE_COMMANDS=ON

Then, let’s see what you can do, once you are in a prepared build directory and have a master version of Kate in your PATH.

I hope the quality is acceptable, that is my first try in a long time to do some screen-cast ;)

As you can see, this is already in a usable state, at least for C/C++ in combination with clangd.

For details on how to build Kate master with its plugins, please take a look at this guide.

If you want to start to hack on the plugin, you find it in the kate.git, addons/lspclient.

Feel welcome to show up on kwrite-devel@kde.org and help out! All development discussions regarding this plugin happen there.

If you are already familiar with Phabricator, post some patch directly at KDE’s Phabricator instance.

KDE ISO Image Writer – GSoC Phase 2

Monday 22nd of July 2019 07:00:39 PM

As mentioned in my previous blog post, the new user interface was functional on Windows. However, the application had to be run as root on Linux to be able to write an ISO image to a USB flash drive.

KAuth

The original user interface used KAuth to write the ISO image without having to run the entire application with root privileges. KAuth is a framework that allows performing privilege escalation on restricted portions of code. In order to run an action with administrator privileges without running the entire application as an administrator, an additional binary (the KAuth helper), which is shipped alongside the main application binary, performs the actions that require elevated privileges. This approach allows privilege escalation for specific portions of code without granting elevated privileges to code that does not need them. After integrating the existing KAuth helper into the new user interface, it was able to write ISO images, asking for authorisation when required.

Finalising The User Interface

In addition to implementing the KAuth back-end, I polished the user interface and I implemented additional features such as drag and drop support which allows the user to select an ISO image by simply dropping a file on the application window.

KDevelop 5.4 beta 1 released

Monday 22nd of July 2019 04:30:00 PM

KDevelop 5.4 beta 1 released

We are happy to announce the release of KDevelop 5.4 Beta 1!

KDevelop 5.4, as a new feature version, will among other things add first support for projects using the Meson build system and ships the Clang-Tidy support plugin as one of the built-in plugins. It also brings 11 months of small improvements across the application. Full details will be given in the announcement of the KDevelop 5.4.0 release, which is currently scheduled for about two weeks from now.

Downloads

You can find the Linux AppImage (learn about AppImage) here: KDevelop 5.4 beta 1 AppImage (64-bit)
Download the file and make it executable (chmod +x KDevelop-5.3.80-x86_64.AppImage), then run it (./KDevelop-5.3.80-x86_64.AppImage).

The source code can be found here: KDevelop 5.4 beta 1 source code

Windows installers are currently not offered; we are looking for someone interested in taking care of that.

kossebau Mon, 2019/07/22 - 18:30 Category News Tags release

KDE Connect sprint 2019

Monday 22nd of July 2019 01:31:00 PM

This blog is about KDE Connect, a project to communicate across all your devices. For example, with KDE Connect you can receive your phone notifications on your desktop computer, control music playing on your desktop from your phone, or use your phone as a remote control for your desktop.

From Friday the 19th to Sunday the 21st, we had the KDE Connect sprint. It's always a nice opportunity to meet the others working on KDE Connect, since we usually only talk to each other online.

Lots of activity at the KDE Connect sprint! (Also some KWin & Onboarding sprinters)

On arrival on Friday, we immediately got our first issue to fix: the Wi-Fi at the sprint venue blocks UDP broadcasts, which meant KDE Connect couldn't find any devices. Adding the IP addresses manually made it work again, but it's a good reminder to actually improve this situation.

On friday and saturday, we had a lot of discussions about projects to improve KDE Connect architecturally. Those are:

  • Not keeping TCP connections open to all reachable devices (really a problem when there are like 100 devices)
  • Making KDE Connect better on Wayland
  • Google locking down Android more and more, and how to handle that
  • Using mDNS (a.k.a. Avahi/Zeroconf/Bonjour) to discover KDE Connect devices
  • Improving the buggy bluetooth backend

Specifically on bluetooth: it is very hard to make it work on all devices. We're slowly making some progress, though.

Besides that, we worked on lots of smaller features and lots of bugfixes, of course. For example, I changed the Android app to not create a thread for every single thing we send. This matters a lot for the mousepad plugin, which sends many small packets.
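The usual fix for that pattern is a single long-lived worker thread draining a queue, so many small packets reuse one thread. Here is the idea as a Python stdlib sketch, purely for illustration (the actual app is Android/Java):

```python
import queue
import threading

sent = []
packets = queue.Queue()

def sender():
    # One long-lived thread drains the queue instead of
    # spawning a new thread per packet.
    while True:
        packet = packets.get()
        if packet is None:  # shutdown sentinel
            break
        sent.append(packet)

worker = threading.Thread(target=sender)
worker.start()

for i in range(5):
    packets.put(f"mouse-delta-{i}")
packets.put(None)
worker.join()

print(sent)
```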

In short, the sprint was great! Thanks SUSE for hosting this sprint!

Python binding for Kuesa

Monday 22nd of July 2019 12:40:56 PM

KUESA™ is a Qt module designed to load, render and manipulate glTF 2.0 models in applications using Qt 3D.

Kuesa provides a C++ and a QML API which makes it easy to do things like triggering animations contained in the glTF files, finding camera details defined by the designer, etc.

It is a great tool so that designers and developers can share glTF based 3D assets.

With the upcoming release of Kuesa 1.1, we are introducing a Python binding for Kuesa. This provides a simple yet powerful way for programmers to integrate glTF content into their Python applications with just a few lines of code.

Building Kuesa’s Python binding

The first step is, of course, to build and install Kuesa itself. Instructions are available here; it’s a simple process. Kuesa is a Qt module, so it typically installs inside Qt’s standard folders.

Next step is to install Python for Qt, AKA PySide. Note that you must install it for the same version of Qt that you compiled Kuesa with.

If you’ve built your own version of Qt, fear not. Building the python binding for that is fairly easy and quick.

In all cases, we would recommend you use a python virtual environment. This will let you install several versions of the Qt bindings.

Once you’ve installed Python for Qt, you’re ready to build the Kuesa binding.

Building bindings for C++ libraries is a relatively simple process which uses several things:

  • a header file which includes all the C++ headers for the files you want to build bindings for
  • an xml file which lists all the classes and provides helper information for the binding generator. This contains details about enums, can help hide C++ methods or properties, etc
  • a binding generator, Shiboken which parses the C++ headers and generates C++ code that implements the binding
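As an illustration of the second input, a minimal (hypothetical) typesystem file for such a binding might look like the sketch below; the real Kuesa typesystem lists every exported class along with its enum and ownership details:

```xml
<typesystem package="Kuesa">
  <!-- pull in the typesystems of the Qt modules the binding depends on -->
  <load-typesystem name="typesystem_core.xml" generate="no"/>
  <object-type name="SceneEntity"/>
  <object-type name="GLTF2Importer"/>
  <object-type name="AnimationPlayer"/>
</typesystem>
```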

The code for the binding, inside Kuesa’s src folder, contains a CMake project file which takes care of all the details.

So assuming the python virtual env is active and the right version of Qt is in the path, building the binding should be as easy as:

cd .../src/python
mkdir build
cd build
cmake ..
make
make install

Note: the version of Python for Qt which ships with the 5.12 series is incomplete in its coverage of Qt 3D. In particular, some useful classes were missed in the animation module, like QClock, which is useful to control the speed and the direction of an animation. We have submitted a patch for PySide which fixes this, and it was merged in 5.13.

Your first Kuesa application with Python

Kuesa ships with a simple python application that demonstrates the use of the binding.

The application starts by importing the various required modules:

from PySide2.QtCore import (Property, QObject, QPropertyAnimation, Signal, Slot)
from PySide2.QtGui import (QGuiApplication, QMatrix4x4, QQuaternion, QVector3D, QWindow)
from PySide2.QtWidgets import (QWidget, QVBoxLayout, QHBoxLayout, QCheckBox, QPushButton, QApplication)
from PySide2.Qt3DCore import (Qt3DCore)
from PySide2.Qt3DRender import (Qt3DRender)
from PySide2.Qt3DExtras import (Qt3DExtras)
from PySide2.Qt3DAnimation import (Qt3DAnimation)
from Kuesa import (Kuesa)

 

The scene graph will need a SceneEntity and a GLTF2Import node to load the glTF content.

self.rootEntity = Kuesa.SceneEntity()
self.rootEntity.loadingDone.connect(self.onLoadingDone)
self.gltfImporter = Kuesa.GLTF2Importer(self.rootEntity)
self.gltfImporter.setSceneEntity(self.rootEntity)
self.gltfImporter.setSource("file://" + wd + "/../assets/models/car/DodgeViper-draco.gltf")

It must also use the frame graph provided by Kuesa. This is needed for the custom materials that Kuesa uses for PBR rendering. It also provides performance improvements such as early z filling, and additional post-processing effects such as blurring or depth of field.

self.fg = Kuesa.ForwardRenderer()
self.fg.setCamera(self.camera())
self.fg.setClearColor("white")
self.setActiveFrameGraph(self.fg)

Note: in this case, the base class is an instance of Qt3DWindow from Qt 3D’s extras module.

When loading is completed, the content of the glTF file can be accessed using the various collections that are available on the SceneEntity class. For example, you can access the animations created by the designer and baked into the glTF file. These can then be controlled from Python!

def onLoadingDone(self):
    self.hoodClock = Qt3DAnimation.QClock(self.rootEntity)
    self.hoodAnimation = Kuesa.AnimationPlayer(self.rootEntity)
    self.hoodAnimation.setClock(self.hoodClock)
    self.hoodAnimation.setSceneEntity(self.rootEntity)
    self.hoodAnimation.setClip("HoodAction")

Referencing glTF content is done using the names assigned by the designer.

From there on, animations can be started and stopped by accessing the animation player object. Speed and direction can be changed using the clock created above.

Finally, the application embeds the Qt3DWindow inside a widget based UI (using a window container widget) and creates a simple UI to control the animations.

 

Download KUESA™ here.

The post Python binding for Kuesa appeared first on KDAB.

Interview with Manga Tengu

Monday 22nd of July 2019 07:23:17 AM

Could you tell us something about yourself?

Hi I’m Nour, better known as “manga tengu”.

I’ve loved drawing since I was a kid. I think it is because pen and paper have always been the most widespread toys for children. When I got to choose what to study I went for architecture as it was a way to combine science and art.

I’ve always been hacking my computer, which led me to get interested in open source.

Do you paint professionally, as a hobby artist, or both?

I paint as a professional hobbyist. Which means it’s a hobby but I put maximum rigor and commitment into it. Professionally I’ve been teaching Krita and digital painting at isart since November 2018.

I’ve made lots of architectural illustration which actually led me to get interested in digital as a painting medium.

What genre(s) do you work in?

I started with caricature as a child. After admitting to myself that I wanted to draw manga, I made hundreds of pages of manga.

After deciding to enter the color realm I’ve been into … drawing manga as digital paintings. Then I rediscovered impressionism, realism…

I’m always piling something on top of what I already have.

Whose work inspires you most — who are your role models as an artist?

I made a rule to always be inspired by several artists at the same time. Getting too focused on a single one appears to have some dangerous effects on me. Actually Vladimir Volegov is the one that moves me the most.

How and when did you get to try digital painting for the first time?

In 2005, I absolutely hated it. It was on a Wacom Graphire 4, small format. At the time it looked so expensive to me… I tried it for something like a few hours and left it. Then I got back to it for a few hours every year or so… I didn’t really get into it before 2011.

What makes you choose digital over traditional painting?

1. The biggest reason for me was that even though a tablet doesn’t look cheap, it’s way cheaper than fine art material. It puts everybody on the same level. If I have a tablet I can have a wider range of color than the finest pigments could give.

2. Speed and flexibility. Depending on how you go at it, your paint can behave as if it were wet for as long as you want, or be instantly dry, then wet again… There is no time spent mixing, cleaning, et cetera.

3. You can focus on your art: if the constraints you’ve been freed from in points 1 and 2 are no longer there, then what can make you stand out? Your talent, your experience, your ideas…

How did you find out about Krita?

I found a David Revoy video about Mypaint. I don’t remember if he suggested Krita or not back then but I thought hey, it seems people in the open source world have more than Blender and Linux for me! Then I went on YouTube and was really impressed with Ramon Miranda’s symmetric robot. I found it so cool I needed to try Krita out.

What was your first impression?

I realized it could do all the things I favored Manga Studio over Photoshop for.

What do you love about Krita?

So many things …

1. At the core, it is definitely meant for drawing and painting. You can feel it in the features and their implementation.

2. I can map shortcuts to any key, not some stupid combination of ctrl button or function buttons. This is very important for lefties. I end up with a very efficient workflow.

3. It’s light and runs on Linux. So I could restore some old computers nobody wanted because “Windows takes 15 minutes to start” and make them into decent working stations.

4. You can talk to the devs directly. It’s not like some gigantic monolith you can only undergo. In fact it feels like a close community.

5. All that for free, seriously?

What do you think needs improvement in Krita? Is there anything that really annoys you?

The Mac version needs some steroids. The resources, bundles, shortcuts import export (I heard they were undergoing some pimping … I have great hopes). For now when I go on a new computer I just override the Krita resource folder, but that’s not enough to bring everything back into place.

What sets Krita apart from the other tools that you use?

The brush engines, the way it is meant for painting…

If you had to pick one favorite of all your work done in Krita so far, what would it be, and why?

That drawing of the Chinese lady in the woods. I feel this is when I stopped focusing on making stupidly smooth shading and began working on my brushwork.

What techniques and brushes did you use in it?

All my brushes are modifications of brushes bundled with Krita:
g)_dry_bristles
i)_wet_bristles_rough
b)_basic-2_opacity

Where can people see more of your work?

https://www.youtube.com/channel/UCbRzccyl4HujGtujH2PjrwA
https://www.artstation.com/mangatengu
https://twitter.com/MangaTengu
https://www.instagram.com/nour.digital.painting
https://www.twitch.tv/mangatengu

Anything else you’d like to share?

Keep it fun when you paint! If you don’t enjoy it, you need to change it.

Plasma Mobile at Plasma Sprint Valencia

Monday 22nd of July 2019 07:20:00 AM

In June month we gathered in Slimbook’s offices to work on Plasma. Along with Plasma developers, we were also joined by KDE Usability and Productivity team.

During the sprint I mostly worked on creating an up-to-date image for Plasma Mobile, as in the last few weeks the Plasma Mobile image had become quite out-of-date and needed an update.

Some of the bugfixes we did include:

Apart from Plasma Mobile, I worked on general Plasma bugfixes as well:

If you want to know the overall progress made at the Plasma + Usability & Productivity sprint, take a look at the dot story for a more detailed sprint report.

Thanks to Slimbook for hosting us and KDE e.V. for sponsoring my travel!

Also, I am going to Akademy 2019, and will be talking with Marco Martin about Plasma on embedded devices.

Somewhat Usable

Monday 22nd of July 2019 06:12:29 AM

Adding a feature yourself is a lot more satisfying than requesting someone else to add it for you, because now you are both the producer and the consumer. But to be honest, I never thought I would be the one implementing the Magnetic Lasso for Krita when I requested it 4 years back, let alone the fact that I’m even getting paid for doing so.

Month 2 in making the Titler – GSoC ’19

Monday 22nd of July 2019 04:00:24 AM

Hi! It’s been a while

And sorry for that; I had planned to update last week but couldn’t do so as I had a few health issues, but now I’m alright.

The QML MLT producer – the progress so far…

From my understanding so far (forgive me for any mistakes that I might make – it’s a different codebase with different concepts – I wholeheartedly welcome corrections and suggestions), the whole producer boils down to two parts: the actual producer code (which is in C and which does the ‘producer stuff’) and the wrapper code (which ‘wraps’, supplements and does the actual rendering of the QML frames). The wrapper files are mainly responsible for rendering the QML templates that are passed to them and making the result available to the actual producer. Consequently, most of the work is done in the wrapper files, as the producer in itself doesn’t change much: it still does the same things as the existing XML producer (producer_kdenlivetitle.c), such as loading a file, generating a frame, and calling rendering methods from the wrapper files.

So let’s see what work has been done. Starting with the new producer file in mlt/src/modules/qt/producer_qml.c

void read_qml(mlt_properties properties)

As the name suggests, it opens a “resource” file and stores the QML file in the global mlt_properties which is passed.

static int producer_get_image( mlt_frame frame, uint8_t **buffer, mlt_image_format *format, int *width, int *height, int writable )

This method takes in a frame and makes use of the wrapper file – it calls the method which does the rendering part in the wrapper files ( renderKdenliveTitle() ) and sets the rendered image using mlt_frame_set_image to the frame that was passed.

static int producer_get_frame( mlt_producer producer, mlt_frame_ptr frame, int index )

This method generates a frame, calls producer_get_image() and sets a ready rendered frame for the producer, and prepares for the next frame.

The wrapper file has the following methods –

void loadQml( producer_ktitle_qml self, const char *templateQml )

This method loads a QML template (passed as a pointer to a char array), checks that it is valid, and initialises a few properties (width and height) using mlt_properties_set(). The next method we have is –

void renderKdenliveTitle( producer_ktitle_qml self, mlt_frame frame, mlt_image_format format, int width, int height, double position, int force_refresh )

renderKdenliveTitle() does the rendering part, given an mlt_frame, an image format, and their parameters. And here is where I use QmlRenderer – my last month’s work – which renders QML. I refactored the library a bit so that it returns the rendered QImage, and I make use of its renderSingleFrame() method, which renders a QML frame for a given position (time).

The programming part in itself wasn’t difficult (although this is far, far from a complete producer – there are a lot of memory leaks right now); understanding how all of it works together is what took the most effort. In fact, it took me a little more than a week just to comprehend the workings of the producer codebase!

For the most part, I believe about 80% of the producer work is done, and the plan is to have a working, solid producer by next week. The current code is still far from a finished producer, but the overall structure is in place, and most of the refactoring that had to be done in the QmlRenderer library to accommodate the producer methods is complete.

Also, the build system for the QmlRenderer library was revamped into a clean one (thanks to Vincent), so to build it, all you need to do is clone the repository and run:

mkdir build
cd build
qmake -r ..
make
cd bin
./QmlRender -o /path/to/output/directory -i /path/to/input/QML/file

Neat

You can view the code for the QML MLT producer here.

More in Tux Machines

Late Coverage of Confidential Computing Consortium

  • Microsoft Partners With Google, Intel, And Others To Form Data Protection Consortium

    The software maker joined Google Cloud, Intel, IBM, Alibaba, Arm, Baidu, Red Hat, Swisscom, and Tencent to establish the Confidential Computing Consortium, a group committed to providing better private data protection, promoting the use of confidential computing, and advancing open source standards among members of the technology community.

  • #OSSUMMIT: Confidential Computing Consortium Takes Shape to Enable Secure Collaboration

    At the Open Source Summit in San Diego, California on August 21, the Linux Foundation announced the formation of the Confidential Computing Consortium. Confidential computing is an approach using encrypted data that enables organizations to share and collaborate, while still maintaining privacy. Among the initial backers of the effort are Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent. “The context of confidential computing is that we can actually use the data encrypted while programs are working on it,” John Gossman, distinguished engineer at Microsoft, said during a keynote presentation announcing the new effort. Initially there are three projects that are part of the Confidential Computing Consortium, with an expectation that more will be added over time. Microsoft has contributed its Open Enclave SDK, Red Hat is contributing the Enarx project for Trusted Execution Environments and Intel is contributing its Software Guard Extensions (SGX) software development kit. Lorie Wigle, general manager, platform security product management at Intel, explained that Intel has had a capability built into some of its processors called software guard which essentially provides a hardware-based capability for protecting an area of memory.

Graphics: Mesa Radeon Vulkan Driver and SPIR-V Support For OpenGL 4.6

  • Mesa Radeon Vulkan Driver Sees ~30% Performance Boost For APUs

    Mesa's RADV Radeon Vulkan driver just saw a big performance optimization land that benefits APUs like Raven Ridge and Picasso, namely systems with no dedicated video memory. The change by Feral's Alex Smith puts the uncached GTT type at a higher index than the visible VRAM type for these configurations without dedicated VRAM.

  • Intel Iris Gallium3D Is Close With SPIR-V Support For OpenGL 4.6

    This week saw OpenGL 4.6 support finally merged for Intel's i965 Mesa driver; it will be part of the upcoming Mesa 19.2 release. Not landed yet, but coming soon, is OpenGL 4.6 support in the newer Intel "Iris" Gallium3D driver. Iris Gallium3D has been at OpenGL 4.5 and is quite close to OpenGL 4.6 as well, thanks to the shared NIR support and the rest of the Intel open-source graphics stack. It's looking less likely that OpenGL 4.6 support will be back-ported to Mesa 19.2 for Iris, but we'll see.

The GPD MicroPC in 3 Minutes [Video Review]

In it I tackle the GPD MicroPC with Ubuntu MATE 19.10. I touch on the same points made in my full text review, but with the added bonus of moving images, rather than words, to illustrate my points. Also: WiringPi - Deprecated

today's howtos