Planet KDE - http://planetKDE.org/

A Week in Valencia – the 2019 Plasma/Usability & Productivity Sprint

Sunday 30th of June 2019 09:59:14 PM

For those that don't know me, I'm relatively new to KDE and spend most of my time doing VDG (Visual Design Group) stuff. The Plasma/Usability & Productivity sprint in Valencia, which took place from June 19th to June 26th, was my first ever KDE sprint. Although we were all working together, I was formally...... Continue Reading →

Smart Pointers in Qt Projects

Sunday 30th of June 2019 03:30:22 PM

Actually, a smart pointer is quite simple: it is an object that manages another object by a certain strategy and cleans up the memory when the managed object is not needed anymore. The most important types of smart pointers are:

  • A unique pointer models access to an object that is exclusively owned by one managing instance. The object is destroyed and its memory is freed when the managing instance destroys the unique pointer. Typical examples are std::unique_ptr or QScopedPointer.
  • A shared pointer is a reference-counting pointer that models shared ownership of an object by several managing instances. When all managing instances have released their share of the ownership, the managed object is automatically destroyed. Typical examples are std::shared_ptr or QSharedPointer.
  • A weak pointer points to an object that is managed by someone else. The important use case here is being able to ask whether the object is still alive and can be accessed. One example is std::weak_ptr, which can point to an object managed by a std::shared_ptr; it can be used to check whether that object still exists and, if so, to obtain a shared pointer for accessing it. Another example is QPointer, a different kind of weak pointer that can be used to check whether a QObject still exists before accessing it (see the sketch after this list).
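To illustrate the weak-pointer behaviour of QPointer, here is a minimal sketch of my own (the QTimer is just a stand-in for any QObject):

#include <QPointer>
#include <QTimer>

void weakPointerDemo()
{
    auto *timer = new QTimer;
    QPointer<QTimer> weak(timer); // observes the object, does not own it

    delete timer; // the object is destroyed by its actual owner

    if (weak.isNull()) {
        // the QPointer was cleared automatically, so a dangling
        // access can be detected and avoided here
    }
}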

For all these pointers one should always keep one rule in mind: NEVER EVER destroy the managed object by hand, because the managed object must only be managed by the smart pointer object. Otherwise, how could the smart pointer still know whether the object can still be accessed?! E.g., the following code leads directly to a crash because of a double delete:

{
    auto foo = std::make_unique<Foo>();
    delete foo.get();
} // crash because of double delete when foo goes out of scope

This problem is obvious, now let’s look at the less obvious problems one might encounter when using smart pointers with Qt.

QObject Hierarchies

QObject instances, and instances of QObject-derived classes, can have a parent object set, which ensures that child objects get destroyed whenever the parent is destroyed. E.g., think of a QWidget-based dialog where all elements of the dialog have the QDialog as parent and get destroyed when the dialog is destroyed. However, when looking at smart pointers there are two problems that we must consider:
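As a quick refresher, a minimal sketch of this parent-based cleanup (my own illustration, not from the post):

#include <QObject>

void parentChildDemo()
{
    auto *parent = new QObject;
    auto *child = new QObject(parent); // parent takes ownership of child

    delete parent; // Qt also destroys child here; no separate delete needed
    // 'child' is now dangling and must not be used
}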

1. Smart Pointer Managed objects must not have a QObject parent

It’s as simple as the paragraph’s headline: when you set a QObject parent on an object that is managed by a smart pointer, Qt’s cleanup mechanism destroys your precious object whenever the parent is destroyed. You might be lucky and always destroy your smart pointer before the QObject parent is destroyed (and nothing bad will happen), but future developers or users of your API might not.
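A minimal sketch of how that goes wrong, assuming plain QObjects (my own example, not from the post):

#include <QObject>
#include <memory>

void doubleDeletePitfall()
{
    auto parent = std::make_unique<QObject>();
    auto child = std::make_unique<QObject>();
    child->setParent(parent.get()); // now the parent *also* owns child

    parent.reset(); // destroys the parent and, via Qt, the child...
    // ...and child's unique_ptr deletes it again at scope exit: double delete
}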

2. Smart Pointers by default call delete and not deleteLater

Calling delete on a QObject that actively participates in the event loop is dangerous and might lead to a crash. So, do not do it! However, all smart pointers that I am aware of call “delete” to destroy the managed object. So, you actively have to take care of this problem by specifying a custom cleanup handler/deleter function. For QScopedPointer there already exists “QScopedPointerDeleteLater” as a predefined cleanup handler that you can specify. You can do the same for std::unique_ptr, std::shared_ptr and QSharedPointer by simply defining a custom deleter function and specifying it when creating the smart pointer.
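A minimal sketch of both variants (my own illustration; DeleteLaterDeleter and unique_qobject_ptr are made-up names, not Qt API):

#include <QObject>
#include <QScopedPointer>
#include <memory>

// the equivalent of QScopedPointerDeleteLater for std::unique_ptr:
struct DeleteLaterDeleter {
    void operator()(QObject *obj) const {
        if (obj)
            obj->deleteLater(); // destroyed safely on the next event loop pass
    }
};

template<typename T>
using unique_qobject_ptr = std::unique_ptr<T, DeleteLaterDeleter>;

void example()
{
    // Qt's predefined cleanup handler for QScopedPointer:
    QScopedPointer<QObject, QScopedPointerDeleteLater> scoped(new QObject);

    // the same idea for std::unique_ptr:
    unique_qobject_ptr<QObject> managed(new QObject);
} // both objects are released via deleteLater() here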

Wrestling for Object Ownership with the QQmlEngine

Besides the QObject ownership there is another, more subtle problem one should be aware of when injecting objects into the QQmlEngine. When using QtQuick in an application, there is often the need to inject objects into the engine (I will not go into detail here; for further reading see https://doc.qt.io/qt-5/qtqml-cppintegration-topic.html). The important fact to be aware of is that at this point a heuristic decides whether the QML engine and its garbage collector assume ownership of the injected objects, or whether the ownership is assumed to be on the C++ side (thus managed by you and your smart pointers).

The general rule for the heuristic is given by the QObjectOwnership enum. Here, make sure that you note the difference between QObjects returned via a Q_PROPERTY property and via a call to a Q_INVOKABLE method. Moreover, note that the description there misses one special case: when an object has a QObject parent, CppOwnership is also assumed. For a detailed discussion of the issues (which might show you a surprisingly hard-to-understand stack trace coming from the depths of the QML engine), I suggest reading this blog post.

Summing up the QML part: when you are using a smart pointer, you will hopefully not set any QObject parent (which would automatically have told the QML engine not to take ownership…). Thus, when making the object available in the QML engine, you must be very aware of how you put the object into the engine, and if needed you must call the static QQmlEngine::setObjectOwnership() method to mark your objects explicitly as being handled by you (otherwise, bad things will happen).
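A minimal sketch of that last step, assuming an object managed by a shared pointer (my own illustration; the context property name is arbitrary):

#include <QQmlContext>
#include <QQmlEngine>
#include <memory>

// inject an object into the engine while keeping ownership on the C++ side
void injectBackend(QQmlEngine &engine, const std::shared_ptr<QObject> &backend)
{
    // explicitly mark the object as C++-owned so the QML garbage
    // collector never deletes it behind our back
    QQmlEngine::setObjectOwnership(backend.get(), QQmlEngine::CppOwnership);
    engine.rootContext()->setContextProperty("backend", backend.get());
}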

Conclusion

Despite the issues above, I very much favor the use of smart pointers. Actually, I am constantly switching to smart pointers in all projects I am managing or contributing to. However, one must be a little careful and conscious about the side effects when using them in Qt-based projects. Even if they bring you much simpler memory management, they do not relieve you of the need to understand how memory is managed in your application.

PS: I plan to continue soon with a post about how one could avoid those issues with the QML integration on an architectural level; but so much for now, this post is already too long.

May/June in KDE PIM

Sunday 30th of June 2019 07:45:00 AM

Following Dan, it’s my turn this time to provide you with an overview of what has happened around Kontact in the past two months. With more than 850 commits by 22 people in the KDE PIM repositories, this can barely scratch the surface though.

KMail

Around email a particular focus area has been security and privacy:

  • Sandro worked on further hardening KMail against so-called “decryption oracle” attacks (bug 404698). That’s an attack where intercepted encrypted message parts are carefully embedded into another email to trick the intended receiver into accidentally decrypting them while replying to the attacker’s email.
  • Jonathan Marten added more fine-grained control over proxy settings for IMAP connections.
  • André improved the key selection and key approval workflow for OpenPGP and S/MIME.

Laurent also continued the work on a number of new productivity features in the email composer:

  • Unicode color emoji support in the email composer. (Screenshots in the original post: the color emoji selector, and Grammalecte reporting a grammar error.)
  • Markdown support in the email composer, allowing you to edit HTML email content in Markdown syntax. (Screenshot: Markdown editing with preview.)

This isn’t all of course, there’s plenty of fixes and improvements all around KMail:

  • Albert fixed an infinite loop when the message list threading cache is corrupted.
  • David fixed a Kontact crash on logout (bug 404881).
  • Laurent fixed access to more than the first message when previewing MBox files (bug 406167).
  • The itinerary extraction plugin benefited from a number of improvements in the extractor engine, see this post for details.

And fixing papercuts and general polishing wasn’t forgotten either (most changes by Laurent):

  • Fix cursor jumping into the Bcc field in new email (bug 407967).
  • Fix opening the New Mail Notifier agent configuration.
  • Fix settings window being too small (bug 407143).
  • Fix account wizard not reacting to Alt-F4 (bug 388815).
  • Fix popup position in message view with a zoom level other than 100%.
  • Fix importing attached vCard files (bug 390900).
  • Add keyboard shortcut for locking/unlocking the search mode.
  • David fixed interaction issues with the status bar progress overlay.
KOrganizer

Around calendaring, most work has been related to the effort of making KCalCore part of KDE Frameworks 5, something that particularly benefits developers using KCalCore outside of KDE PIM. The changes to KCalCore also aimed at making it easier to use from QML, by turning more data types into implicitly shared value types with Q_GADGET annotations. This work should come to a conclusion soon, so we can continue the KF5 review process.
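For illustration, the Q_GADGET value-type pattern mentioned above looks roughly like this (Event and its single property are made up for this sketch, not actual KCalCore API):

#include <QMetaType>
#include <QSharedData>
#include <QSharedDataPointer>
#include <QString>

class EventData : public QSharedData
{
public:
    QString summary;
};

// a value type usable from QML: copyable, implicitly shared, introspectable
class Event
{
    Q_GADGET
    Q_PROPERTY(QString summary READ summary WRITE setSummary)
public:
    Event() : d(new EventData) {}
    QString summary() const { return d->summary; }
    void setSummary(const QString &s) { d->summary = s; }
private:
    QSharedDataPointer<EventData> d;
};
Q_DECLARE_METATYPE(Event)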

Of course this isn’t all that happened around calendaring, there were a few noteworthy fixes for users too:

  • Fixed an infinite loop in the task model in case of duplicate UIDs.
  • Improved visibility of the timeline/Gantt views in KOrganizer with dark color schemes.
  • Damien Caliste fixed encoding of 0 delay durations in the iCal format.
KAddressBook

Like calendaring, contact handling also saw a number of changes related to making KContacts part of KDE Frameworks 5. Reviewing the code using KContacts led to a number of repeated patterns being upstreamed, and to streamlining the contact handling code to make it easier to maintain. As a side effect, a number of issues around the KContacts/Grantlee integration were fixed, solving for example limitations regarding the localization of contact display and contact printing.

There is one more step required to complete the KContacts preparation for KDE Frameworks 5, the move from legacy custom vCard entries to the IMPP element for messaging addresses.

Akregator

Akregator also received a few fixes:

  • Heiko Becker fixed associating notifications with the application.
  • Wolfgang Bauer fixed a crash with Qt 5.12 (bug 371511).
  • Laurent fixed comment font size issues on high DPI screens (bug 398516), and display of feed comments in the feed properties dialog (bug 408126).
Common Infrastructure

Probably the most important change in the past two months happened in Akonadi: Dan implemented an automatic recovery path for the dreaded “Multiple Merge Candidate” error (bug 338658). This is an error condition the Akonadi database can end up in for still unknown reasons, and which so far blocked successful IMAP synchronization. Akonadi is now able to automatically recover from this state and, with the next synchronization against the IMAP server, put itself back into a consistent state.

This isn’t all though:

  • Another important fix was raising the size limit for remote identifiers to 1024 characters (bug 394839), also by Dan.
  • David fixed a number of memory leaks.
  • Filipe Azevedo fixed a few macOS specific deployment issues.
  • Dan implemented improvements for using PostgreSQL as a backend for Akonadi.

The backend connectors also saw some work:

  • For the Kolab resource a memory management issue was fixed, and it was ported away from KDELibs4Support legacy code.
  • David Jarvie fixed a few configuration related issues in the KAlarm resource.
  • Laurent fixed configuration issues in the MBox resources.

The pimdataexporter utility, which allows importing and exporting the entire set of KDE PIM settings and associated data, has received a large number of changes too, with Laurent fixing various import/export issues and improving the consistency and wording of the UI.

Help us make Kontact even better!

Take a look at some of the junior jobs that we have! They are simple, mostly programming tasks that don’t require any deep knowledge or understanding of Kontact, so anyone can work on them. Feel free to pick any task from the list and reach out to us! We’ll be happy to guide you and answer all your questions. Read more here…

KDE Usability & Productivity: Week 77

Sunday 30th of June 2019 05:11:17 AM

We’re up to week 77 in KDE’s Usability & Productivity initiative! This week’s report encompasses the latter half of the Usability & Productivity sprint. Quite a lot of great work got done, and two features I’m particularly excited about are in progress with patches submitted and under review: image annotation support in Spectacle, and customizable sort ordering for wallpaper slideshows. These are not done yet, but should be soon! Meanwhile, check out what’s already landed:

New Features · Bugfixes & Performance Improvements · User Interface Improvements

Pretty freakin’ sweet, huh?! It was a great development sprint and I’m really happy with how it went. I’ll be writing another more in-depth article about it, so stay tuned.

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a tax-deductible donation to the KDE e.V. foundation.

[GSoC – 3] Achieving consistency between SDDM and Plasma

Saturday 29th of June 2019 03:41:20 PM

Previously: 1st GSoC post, 2nd GSoC post. With the first phase of Google Summer of Code over, it's high time some substantial progress towards the main goal of the project was presented. Since the last post, two things have been done. First, Plasma is now going to be following upstream advice on...... Continue Reading →

Latte bug fix release v0.8.9

Friday 28th of June 2019 11:43:48 AM

Welcome Latte Dock v0.8.9, the LAST stable release for the v0.8 branch!


Go get v0.8.9 from download.kde.org or store.kde.org*

* The archive has been signed with the gpg key: 325E 97C3 2E60 1F5D 4EAD CF3A 5599 9050 A2D9 110E
Fixes:
  • fix: show notifications applet when present in Latte (for Plasma >= 5.16)

Latte v0.9:

For those following Latte news: in July the first beta release of the v0.9 branch will land, and if everything goes on schedule, v0.9 will replace v0.8 during the first days of August as the officially supported stable version. Requirements for v0.9 are the same as for v0.8:
Minimum requirements:
  • Qt >= 5.9
  • Plasma >=5.12
Proposed requirements:
  • Qt >= 5.12
  • Plasma >=5.15

Community Help:
Latte v0.9 introduces two new APIs that developers can use to leverage the full potential of the new version.
The first API can be used by Plasma applets to exchange information with Latte from QML code. This way we can have applets that work just fine with Plasma panels while at the same time leveraging the full capabilities of Latte docks/panels. My Window Applets are already using it in their latest versions.
The second API is related to the new standalone Latte indicators and how developers can implement them.
In order for these APIs to reach developers, I would like this information to be present on KDE Techbase. I have already created a word-style document for them, but you will have to forgive me that I do not have the time to upload it in markup language to the referenced page. If you think you can help or take up this task, please contact me through the relevant bug report in the KDE bug tracker. If no community interest appears, I will just upload the 15-page PDF somewhere on the net and point all interested developers to that link.

Donations:

You can find Latte at Liberapay if you want to support it,

or you can split your donation between my active projects in the KDE store.

Qt Creator 4.10 Beta2 released

Friday 28th of June 2019 10:38:48 AM

We are happy to announce the release of Qt Creator 4.10 Beta2!

Most notably, we fixed a regression in the signing options for iOS devices, and an issue where the “Build Android APK” step of existing Android projects was not restored.
As always, you can find more details in our change log.

Get Qt Creator 4.10 Beta2

The open source version is available on the Qt download page under “Pre-releases”, and commercially licensed packages are available on the Qt Account Portal. Qt Creator 4.10 Beta2 is also available under Preview > Qt Creator 4.10.0-beta2 in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.10 Beta2 released appeared first on Qt Blog.

How to comply with the upcoming requirements in Google Play

Friday 28th of June 2019 07:55:14 AM

Starting on August 1st, Google Play will no longer accept new applications or application updates without a 64-bit version (unless of course there is no native code at all). For Qt users, this means you have to build an additional APK that contains the 64-bit binaries.

Qt has shipped 64-bit binaries for Android since Qt 5.12.0, so complying with the new requirement is technically no big deal. But after discussing with users, I see that it is not clear to everyone exactly how to set up an app in Google Play that supports multiple architectures at once.

This call for help, combined with the fact that I am currently setting up a fresh Windows work station, made for a golden opportunity to look at Qt for Android app development in general. In this blog, I will start with a clean slate and show how to get started on Android, as well as how to publish an app that complies with the Google Play requirements.

I will

  • guide you through the installation steps needed to get a working environment,
  • describe the process of building an application for multiple architectures,
  • and show you how to upload your binaries to Google Play.

The first few parts might be familiar to many of you, so if you get bored and want to hear about the main topic, feel free to skip right to Step 4.

A note about SDK versions

The Android SDK is itself under heavy development, and quite often it isn’t backwards compatible, causing problems with our integration in Qt. We react as quickly as we can to issues that arise from changes or regressions in the SDK, but a general rule of thumb is to wait to upgrade to the latest and greatest versions of the Android tools until we have had a chance to adapt to incompatibilities in Qt.

While there have been some issues on other platforms as well, the majority of the problems we have seen have been on Windows. So if you are on this host system, be extra careful to check for known good versions before setting up your environment.

We are currently recommending the use of the following tools together with Qt 5.13.0:

  • Android build tools version 28
  • Android NDK r19
  • Java Development Kit 8

If you do bump into some problems, please make sure to check our known issues page to see if there is any updated information.

Now for the details on where and how to get the right versions of everything.

Step 1: Installing the JDK

Android is primarily a Java-based platform, and while you can write your Qt applications entirely in C++ and/or QML, you will need the Java Development Kit in order to compile the files that make the integration possible.

Note that there is an incompatibility between Android’s SDK Manager tool and the later versions of Oracle’s JDK, making the latest JDK versions unusable together with the Android environment. To work around this, we recommend that you download JDK version 8 for use with Android.

You may use the official binaries from Oracle, or an alternative, such as the AdoptOpenJDK project.

Download and run the installer and install it in the default location.

Step 2: Setting up the Android environment

The second step is getting the actual Android development environment. Start by downloading and installing Android Studio. Scroll past the different “beta” and “canary” releases, and you will find the latest stable release.

Once Android Studio has been installed, you can use it to install the “SDK Platform” for you. This is the actual collection of Java classes for a particular Android distribution. When you start Android Studio for the first time, it should prompt you to install the SDK Platform. You can safely use the latest version of the SDK, platform 29, which is the suggested default.

In addition to the SDK, we also need to install the NDK. This is the development kit used for cross-compiling your C++ code to run on Android. As mentioned above, we will use Android NDK r19c and not the latest release, since there are issues with Android NDK r20 causing compilation errors. The issue has been addressed for Qt 5.13.1 and Qt 5.12.5, so once you start using those, upgrading to Android NDK r20 is possible.

And as a final step, we need to make sure that we are using version 28.0.3 of the Android build tools rather than the latest version. Note that this is only an issue on Windows hosts.

From the starting dialog box of Android Studio, click on Configure and then select SDK Manager. Go to the SDK Tools tab and make sure Show Package Details is checked. Under the Android build tools, make sure you deselect 29.0.0 and select 28.0.3 instead.

This will uninstall the non-functioning version of the build tools and install the older one. Click Apply to start the process, and when it is done you will have installed a functioning Android environment.

Step 3: Install Qt

For this guide, we will be using Qt 5.13.0. If you haven’t already, start by downloading the online installer tool from your Qt Account.

When you run the installer, make sure you select the arm64-v8a and armv7a target architectures. These are the technical names for, respectively, the 64-bit and 32-bit versions of the ARM family of processors, which are the most commonly used processors on Android devices.

Note: For this example in particular, we will also need Qt Purchasing, since it contains the application I am planning to use as demonstration. This can also be selected from the same list.

When Qt is finished installing, start Qt Creator and open the Options. Under Devices, select the Android tab and select the directories where you installed the different packages in the previous steps.

If everything is set up correctly, Qt Creator will show a green check mark, and you will be ready to do Android development with Qt.

Step 4: Setting up project in Qt Creator

For this example, I will use the Qt Hangman example. This is a small example we made to show how to implement in-app purchases in a cross-platform way.

First we open the example in Qt Creator, which can be done from the Welcome screen. Once it has been opened, Qt Creator will ask us to select which Qt versions we want to use for building it.

Select both the 64-bit and 32-bit versions of Qt and click Configure Project.

In order to comply with the additional requirements in Google Play, we want to create two APK packages: One for 32-bit devices and one for 64-bit devices. We need to configure each of these separately.

This screenshot shows an example setup for the 32-bit build. Important things to notice here:

  • Use a different shadow build directory for each of the builds.
  • Make sure you select the Release configuration.
  • You should also tick the Sign package checkbox to sign your package, otherwise the Google Play store will reject it.

With the exception of the build directory, the setup for the 64-bit build should be the same. Select the 64-bit kit on the left-hand side and make the equivalent adjustments there.

Step 5: Preparing the manifest

In addition, the two packages will need identical AndroidManifest.xml files, except for one detail: the version code of the two has to differ. The version code can be pretty much anything you choose, as long as you keep in mind that when an APK is installed on a device from the store, the one with the highest version code will be selected. As Qt user Fabien Chéreau pointed out in a comment to a bug report, you therefore typically want to set the version code of the 64-bit version higher than that of the 32-bit version, so that a device which supports both will prefer the 64-bit one.

As Felix Barz pointed out in the same thread, this can be automated in the .pro file of the project. Here is my slightly modified version of his code:

defineReplace(droidVersionCode) {
    segments = $$split(1, ".")
    for (segment, segments): vCode = "$$first(vCode)$$format_number($$segment, width=3 zeropad)"

    contains(ANDROID_TARGET_ARCH, arm64-v8a): \
        suffix = 1
    else:contains(ANDROID_TARGET_ARCH, armeabi-v7a): \
        suffix = 0
    # add more cases as needed

    return($$first(vCode)$$first(suffix))
}

VERSION = 1.2.3
ANDROID_VERSION_NAME = $$VERSION
ANDROID_VERSION_CODE = $$droidVersionCode($$ANDROID_VERSION_NAME)

This neat trick (thanks, Felix!) converts the application’s VERSION to an integer and appends a new digit, on the least significant end, to signify the architecture. So for version 1.2.3, for instance, the version code will be 0010020030 for the 32-bit package and 0010020031 for the 64-bit one.

When you generate an AndroidManifest.xml using the button under Build APK in the project settings, this will automatically pick up this version code from the project. Once you have done that and edited the manifest to have your application’s package name and title, the final step is to build the package: First you do a build with one of the two kits, and then you must activate the other kit and do the build again.

When you are done, you will have two releasable APK packages, one in each of the build directories you set up earlier. Relative to the build directory, the package will be in android-build\build\outputs\apk\release.

Note that for a more efficient setup, you will probably want to automate this process. This is also quite possible, since all the tools used by Qt Creator can be run from the command line. Take a look at the androiddeployqt documentation for more information.

Step 6: Publish the application in Google Play

The Google Play publishing page is quite self-documenting, and there are many good guides out there on how to do this, so I won’t go through all the steps for filling out the form. In general, just fill out all the information it asks for, provide the images it needs, and make sure all the checkmarks in the left side bar are green. You can add all kinds of content here, so take your time with it. In the end, it will have an impact on how popular your app becomes.

Once that has been done, you can create a new release under App Releases and upload your APKs to it.

One thing to note is that the first time you do this, you will be asked if you want to allow Google Play to manage your app signing key.

For now, you will have to select to Opt Out of this. In order to use this feature, the application has to be in the new “Android App Bundle” format. This is not yet supported by Qt, but we are working to support this as well. In fact, Bogdan Vatra from KDAB (who is also the maintainer of the Android port of Qt) has already posted a patch which addresses the biggest challenge in getting such support in place.

When we do get support for it, it will make the release process a little bit more convenient. With the AAB format, Google Play will generate the optimized APKs for different architectures for us, but for now we have to do this manually by setting up multiple kits and building multiple APKs, as I have described in this tutorial.

When the two APKs have been uploaded to a release, you should see a listing such as this: two separate APK packages, each covering a single native platform. By expanding each of the entries, you can see the “Differentiating APK details”. These are the criteria used for selecting one APK over the other when a device downloads the app from the Google Play Store. In this case, the differentiating detail should be the native platform.

And that is all there is to it: creating and releasing a Qt application in Google Play with both 32-bit and 64-bit binaries. When the APKs have been uploaded, you can hit Publish and wait for Google Play to do its automated magic. And if you do have existing 32-bit apps in the store at the moment, make sure you update them with a 64-bit version well before August 2021, as that is when non-compliant apps will no longer be served to 64-bit devices, even if they also support 32-bit binaries.

Until then, happy hacking and follow me on Twitter for irregular updates and fun curiosities.

The post How to comply with the upcoming requirements in Google Play appeared first on Qt Blog.

Week 4, Titler Tool and MLT – GSoC ’19

Friday 28th of June 2019 05:30:50 AM

Hi again!

It’s already been a month now, and this week hasn’t been the most exciting one: mostly meddling with MLT, going through pages of documentation, compiling MLT and getting used to the MLT codebase.

Last week I concluded the rendering library part, and this week I began writing a new producer in MLT for QML, which will be rendered using the rendering library. So I went through a lot of MLT documentation, and since this is a relatively new field for me, here is what I’ve gathered so far:

At its core, MLT employs the basic producer-consumer concept. A producer produces data (here, frame objects) and a consumer consumes frames – as simple as that.

Producer —> Consumer

We have producers for the different things the current titler uses, like qtext, qimage and kdenlivetitle. What these producers do is simple; take the case of kdenlivetitle: it loads an XML file, parses it, initializes producer properties, and then the producer is ready to produce frames.
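To make the producer-consumer idea concrete, here is a minimal sketch of my own using the mlt++ bindings (qtext and sdl2 are existing MLT services, but the exact setup below is just an illustration, not the project's code):

#include <mlt++/Mlt.h>

int main()
{
    Mlt::Factory::init();
    Mlt::Profile profile;

    // a producer generates frames (here from the qtext service)
    Mlt::Producer producer(profile, "qtext:Hello");
    // a consumer pulls frames from the producer and displays/encodes them
    Mlt::Consumer consumer(profile, "sdl2");

    consumer.connect(producer);
    consumer.start();
    while (!consumer.is_stopped()) {
        // a real program would sleep or wait on an event here
    }

    Mlt::Factory::close();
    return 0;
}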

What I have to do over the next few days is write a new producer which loads QML, renders it (using my library) and then produces those frames. I’ve started writing the new producer, although progress has been slow as I’m still wrapping my head around all the code and trying to figure out what my next step should be. You can look at the code here, although there isn’t much at the moment – producer_qml.c, qml_wrapper.*

Apart from that, the build system for the rendering library will be added to the MLT build system within the next few days, and with that I’ll be able to use the rendering library for the producer. Pretty soon, we should hopefully have a working producer!

 

KDE Applications 19.08 Schedule finalized

Thursday 27th of June 2019 08:42:39 PM

It is available at the usual place https://community.kde.org/Schedules/Applications/19.08_Release_Schedule

Dependency freeze is in two weeks (July 11) and feature freeze a week after that, so make sure you start finishing your stuff!


P.S.: Remember, the last day to apply for Akademy Travel Support is this Sunday, June 30!

New Facebook Account

Thursday 27th of June 2019 03:13:39 PM

Facebook is a business selling very targeted advertising channels. This is not new; Royal Mail's Advertising Mail service offers ‘precision targeting’. But Facebook does it with many more precision options, with emotive impact because it uses video and feels like it comes from your friends, and with the option of anonymity. This turns out to be most effective in political advertising. There are laws banning political advertising on television because politics should be about reasoned arguments, not emotive simplistic soundbites, but the law has yet to be changed to extend this ban to video on the internet. The result has undermined the democracy of the UK during the EU referendum and elsewhere.

To do this Facebook collects data and information on you. Normally this isn’t a problem, but you never know when journalists will come sniffing around for gossip in your past life, or an ex-partner will want to take something out of context to prove a point in divorce proceedings. The commonly used example of data collection going wrong is the Dutch government keeping a list of who was Jewish, with terrible consequences when the Nazis invaded. We do not have a fascist government here but you can never assume it will never happen. Facebook has been shown to care little for data protection and allowed companies such as Cambridge Analytica to steal data illegally and without oversight. Again this was used to undermine democracy in the 2016 EU referendum.

In return we get a useful way to keep in touch with friends and family, have discussions with groups and chat with people; these are useful services. So what can you do if you don’t want your history to be kept by an untrusted third party? Delete your account and you’ll miss out on important social interactions. Well, there’s an easy option that nobody seems to have picked up on: open a new account and move your important content over, dropping your history.

Thanks to the EU's GDPR legislation we have a Right to Data Portability. This is similar to, but separate from, the Right to Access. It means it’s easy enough to extract your data out of Facebook. I downloaded mine and it’s a whopping 4GB of text, photos and video. I then set up a new account and started triaging anything I wanted to keep. What’s in my history?

Your Posts and Other People’s Posts to Your Timeline

These are all ephemeral.  You post them, get some reaction, but they’re not very interesting a week or more later.  Especially all the automated ones Spotify sent saying what music I was playing.

Photos and videos

Here’s a big chunk. Over 1500 of them, some 2GB of pics, mostly of me looking awesome paddling. I copied any I want to keep over to the easy photo dump Google Photos. There were about 250 I wanted to keep.

Comments

I’ve really no desire to keep these.

Likes and reactions

Similarly ephemeral.

Friends

This can be copied over easily to a new account, you just friend your old account and then it’ll suggest all your old friends.  A Facebook friend is not the same as a real life friend so it’s sensible to triage out anyone you don’t have contact with and don’t find interesting to get updates from.

You can’t see people who have unfriended you, probably for the best.

Stories

Facebook’s other way to post pics to try to be cool with the Snapchat generation.  Their very nature is that they don’t stay around long so nothing important here.

Following and followers

This does include some people who have ignored a friend request but still have their feed public, so that request gets turned into a follow. Nobody whose friendship I desperately crave is on the list, fortunately, so they can be ignored.

Messages

Despite removing the Facebook branding from their messaging service a few years ago, it’s still very much part of Facebook. Another nearly 2GB of text and pics in here. This is the kind of history that is well worth removing; who knows when those chats will come back to haunt you. There are some more pics here worth saving, but not many, since any I value for more than a passing comment are posted on my feed. There’s a handful of longer-term group chats I can just add my new account back into.

Groups

One group I run and a few I use frequently, I can just rejoin them and set myself as admin on the one I run.

Events

Past events are obviously not important.  I had 1 future event I can easily rejoin.

Profile information

It’s worth having a triage and review of this to keep it current and not let Facebook know more than you want it to.

Pages

Some pages I’m admin or moderator of I can just rejoin; where I'm a moderator I need to track down an admin to add me back in.

Marketplace, Payment history, Saved items and collections, Your places

I’ve never found a use for these features.

Apps and websites

It’s handy to use Facebook as a single sign-on for websites sometimes, but it’s worth reviewing and triaging these to stop them taking excess data without you knowing. The main one I used was Spotify, but it turns out that has long since been turned into a non-Facebook account, so no bother wiping all these.

Other activity

Anyone remember pokes?

What Facebook Decides about me

Facebook gives you labels to pass on to advertisers. It seems I’m interested in the Swahili language, Sweetwater in Texas, the Secret Intelligence Service and other curiosities.

Search history

I can’t think of any good reason why I’d want Facebook to know about 8 years of searches.

Location history

Holy guacamole, they have kept my location for each and every day since I got a smartphone. That’s going to be wiped.

Calls and messages

Fortunately they haven’t been taking these from my phone history, but I’m sure it’s only one setting away before they do.

Friend Peer Group

They say I have ‘Established Adult Life’.  I think this means I’m done.

Your address books

They did, however, keep all my contacts from GMail and my phone from whenever I first logged on from a web browser and phone. They can be gone.

So most of this can be dropped and recreated quite easily. It’s a fun evening going through your old photos. My 4GB of data is kept in a cloud drive which can be accessed through details in my will, so if I die and my autobiographer wants to dig up the gossip on me, they can.

I also removed the app from my phone. The messenger app is useful, but the Facebook one seems a distraction; if I want to browse and post Facebook stuff I can use the web browser. And on a desktop computer I can use https://www.messenger.com/ rather than the distraction of the Facebook website.

And the first thing I posted? Going tobogganing!

New account at https://www.facebook.com/jonathan.riddell.737 do re-friend me if you like.

 

My experience using Kdenlive on the 48 Hour Film Project

Thursday 27th of June 2019 02:59:16 PM

Cutelyst 2.8.0 released

Thursday 27th of June 2019 02:37:19 PM

Cutelyst, a Qt/C++ web framework, got a new release!

This release took a while to get out because I wanted to fix some important stuff, but time is short. I’ve been working on polishing my UPnpQt library and on the yet-to-be-released FirebaseQt and FirebaseQtAdmin (used on a mobile app and a REST/web app built with Cutelyst). The latter is working quite well, although at the moment it depends on a Python script to get the Google token; luckily that is only a temporary waste of 25MB of RAM every 45 minutes.

Back to the release: thanks to Alexander Yudaev it now has CPack support, and 顏子鳴 also fixed some bugs and added a deflate feature to RenderView, FastCGI and H2.

I’m also very happy that we now have more than 500 stars on GitHub.

Have fun https://github.com/cutelyst/cutelyst/releases/tag/v2.8.0

Little Trouble in Big Data – Part 2

Thursday 27th of June 2019 08:45:15 AM

In the first blog in this series, I showed how we solved the original problem of using mmap() to load a large set of data into RAM all at once, in response to a request for help from a bioinformatics group dealing with massive data sets on a regular basis. The catch in our solution, however, was that the process still took too long. In this blog, I describe how we solved this, starting with Step 3 of the process I introduced in Blog 1:

3. Fine-grained Threading

The original code we inherited was written on the premise that:

  1. Eigen uses OpenMP to utilize multiple cores for vector and matrix operations.
  2. Writing out the results of the Monte Carlo simulation is time-consuming and was therefore put into its own thread by way of OpenMP, with another OpenMP critical section doing the actual analysis.

Of course, there were some slight flaws in this plan.

  1. Eigen’s use of OpenMP covers only some very specific algorithms built into Eigen itself, none of which this analysis code was using, so that was useless. Eigen does make use of vectorization, however, which is good and can in ideal circumstances give a factor of 4 speedup compared to a simplistic implementation. So we wanted to keep that part.
  2. The threading for writing results was, shall we say, sub-optimal. Communication between the simulation thread and the writer thread was by way of a lockless list/queue they had found on the interwebs. Sadly, this was implemented with a busy spin loop which just locked the CPU at 100% whilst waiting for data to arrive once every n seconds or minutes, which means it was just burning cycles for no good reason. The basic outline algorithm looks something like this:
const std::vector<unsigned int> colIndices = {0, 1, 2, 3, ... };
const std::vector<unsigned int> markerIndices = randomise(colIndices);
for (unsigned int i = 0; i < maxIterations; ++i) {
    for (unsigned int j = 0; j < numCols; ++j) {
        const unsigned int marker = markerIndices[j];
        const auto col = data.mappedZ.col(marker);
        output += doStuff(col);
    }
    if (i % numIterations == 0)
        writeOutput(output);
}

So, what can we do to make better use of the available cores? For technical reasons related to how Markov Chain Monte Carlo works, we can neither parallelize the outer loop over iterations nor the inner loop over the columns (SNPs). What else can we do?

Well, recall that we are dealing with large numbers of individuals – 500,000 of them in fact. So we could split the operations on these 500k elements into smaller chunks and give each chunk to a core to process and then recombine the results at the end. If we use Eigen for each chunk, we still get to keep the SIMD vectorization mentioned earlier. Now, we could do that ourselves but why should we worry about chunking and synchronization when somebody else has already done it and tested it for us?

This was an ideal chance for me to try out Intel’s Thread Building Blocks library, TBB for short. As of 2017 this is now available under the Apache 2.0 license and so is suitable for most uses.

TBB has just the feature for this kind of quick win in the form of its parallel_for and parallel_reduce template helpers. The former performs the map operation (applies a function to each element in a collection where each is independent). The latter performs the reduce operation, which is essentially a map operation followed by a series of combiner functions, to boil the result down to a single value.

These are very easy to use so you can trivially convert a serial piece of code into a threaded piece just by passing in the collection and lambdas representing the map function (and also a combiner function in the case of parallel_reduce).
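For example, here is a minimal sketch of parallel_for (my own illustration, not from our project code), applying a function to each chunk of a collection independently:

#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <vector>

// scale every element of v in parallel; TBB splits the range into chunks
// and processes each chunk independently on its threadpool
void scaleInPlace(std::vector<double> &v, double factor)
{
    tbb::parallel_for(tbb::blocked_range<size_t>(0, v.size()),
                      [&](const tbb::blocked_range<size_t> &r) {
                          for (size_t i = r.begin(); i != r.end(); ++i)
                              v[i] *= factor;
                      });
}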

Let’s take the case of a dot (or scalar) product as an example. Given two vectors of equal length, we multiply them together component-wise then sum the results to get the final value. To write a wrapper function that does this in parallel across many cores we can do something like this:

// assumes: #include <tbb/parallel_reduce.h>, <tbb/blocked_range.h>, <Eigen/Dense>,
// and using declarations for the tbb and Eigen types
const size_t grainSize = 10000;

double parallelDotProduct(const VectorXf &Cx, const VectorXd &y_tilde)
{
    const unsigned long startIndex = 0;
    const unsigned long endIndex = static_cast<unsigned long>(y_tilde.size());

    auto apply = [&](const blocked_range<unsigned long> &r, double initialValue) {
        const long start = static_cast<long>(r.begin());
        const long count = static_cast<long>(r.end() - r.begin());
        // component-wise product of the two segments, summed: a partial dot product
        const auto sum = initialValue
            + Cx.segment(start, count).cast<double>()
                .cwiseProduct(y_tilde.segment(start, count)).sum();
        return sum;
    };

    auto combine = [](double a, double b) { return a + b; };

    return parallel_reduce(blocked_range<unsigned long>(startIndex, endIndex, grainSize),
                           0.0, apply, combine);
}

Here, we pass in the two vectors for which we wish to find the scalar product, and store the start and end indices. We then define two lambda functions.

  1. The apply lambda uses Eigen’s component-wise product and the sum() function to calculate the partial dot product for the subset of contiguous indices passed in via the blocked_range argument. The initialValue argument must be added on. This is just zero in this case, but it allows you to pass in data from other operations if your algorithm needs it.
  2. The combine lambda then just adds up the outputs of the apply lambda.

When we then call parallel_reduce with these two functions, and the range of indices over which they should be called, TBB will split the range behind the scenes into chunks based on a minimum size of the grainSize we pass in. Then it will create a lightweight task object for each chunk and queue these up onto TBB’s work-stealing threadpool. We don’t have to worry about synchronization or locking or threadpools at all. Just call this one helper template and it does what we need!

The grain size may need some tuning to get optimal CPU usage based upon how much work the lambdas are performing, but as a general rule of thumb, it should be such that more chunks (tasks) are generated than you have CPU cores. That way the threadpool is less likely to have some cores starved of work. But with too many, it will spend too much time in the overhead of scheduling and synchronizing the work and results between threads/cores.

I did this for all of the operations in the inner loop’s doStuff() function and for some others in the outer loop which do more work across the large (100,000+ element) vectors and this yielded a very nice improvement in the CPU utilization across cores.

So far so good. In the next blog, I’ll show you how we proceed from here, as it turns out this is not the end of the story.

The post Little Trouble in Big Data – Part 2 appeared first on KDAB.

Krita 4.2.2 Released

Thursday 27th of June 2019 07:00:16 AM

Within a month of Krita 4.2.1, we’re releasing Krita 4.2.2. This is another bug fix release. We intend to have monthly bug fix releases of Krita 4.2 until it’s time to release 4.3, which will have new features as well. Here’s the list of bug fixes:

  • Text editor: make sure the background color is the one set in the settings (BUG:408344)
  • Fix a crash when creating a text shape (BUG:407554)
  • Make sure the text style is not reset when removing the last character in the text editor (BUG:408441)
  • Fix an issue on macOS where some libraries could not be loaded (BUG:408343)
  • Use a highlighted tool button in the selection tool option dockers so it’s easier to see which selection action is active
  • Fix the nearest neighbour transform algorithm (BUG:408182)
  • Fix a styling issue in the filter layers properties dialog (BUG:408171)
  • Fix an issue where if Krita was set to use a language other than English, vector strokes were drawn wrongly
  • Fix selecting colors from the combobox in the palette docker
  • Fix a crash when loading a broken KPL file (BUG:408447)
  • Fix an issue where a transparent pattern fill layer was loaded incorrectly (BUG:408169)
  • Make it possible to make the onion skin docker smaller (BUG:407646)
  • Improve loading GPL palette files with thousands of columns
  • Fix the slider widget to make it impossible to get negative values
  • Improve the tiff import/export filter (BUG:408177)
  • Fix loading the scripter Python plugin when using a language other than English
  • Improve the reference image tool and optimize loading images from clipboard
  • Make the camera raw import filter honor batch mode
  • Fix rendering of clone layers if the source layer is not visible (BUG:408167, BUG:405536)
  • Fix move and transform tools after a quick layer duplication (BUG:408593)
  • Fix a crash when selecting the opaque pixels on a transform mask (BUG:408618)
  • Fix loading sRGB EXR files (BUG:408485)
  • Make the new image dialog choose the last used option even when the user’s language has changed
  • Fix the “Enforce Palette Colors” feature (BUG:408256)
  • Update the brush preview on every brush stamp creation (BUG:389432)
  • Make it possible to edit vector shapes on duplicated vector layers (BUG:408028)
  • Hide the color picker button in the vector object properties docker, it’s unimplemented
  • Fix color as mask export in GIH and GBR brush tip export (BUG:389928)
  • Restore the default favorite blending modes
  • Add a header to all right-click menus on the canvas so the first thing under the cursor isn’t something dangerous, like ‘cut’ (BUG:408696)
  • Fix an incorrect condition when rendering animations where Krita would complain about being out of memory
  • Keep the community links in the welcome screen visible when changing theme (BUG:408686)
  • Check after saving whether the saved file can be opened and has correct contents
  • Improve the import/export error handling and reporting
  • Make sure the filter dialog shows up in front of Krita’s main window (BUG:408867)
  • Make sure that the contiguous selection tool provides the antialiasing switch (BUG:408733)
  • Fix the fuzziness setting in the contiguous selection tool
  • Fix putting the text shape behind every other shape on a vector layer after editing text (BUG:408693)
  • Fix switching the pointer type by stylus tip (BUG:408454, BUG:405747)
  • Fix an issue on Linux where switching from pen to mouse would prevent the mouse from drawing on the canvas (BUG:407595)
  • Fix a crash when the user undoes creating layers too quickly (BUG:408484)
  • Fix using .KRA and .ORA files as file layers (BUG:408087)
  • Clear all points in the outline selection on clicking (BUG:408439)
  • Fix a crash when using the fill tool in fast mode on a pixel selection mask
  • Fix merging layers with inactive selection masks (BUG:402070)
  • Remove default actions from the Reference Image tool that were inappropriate (BUG:408427)
  • Fix undo/redo not restoring the document to unmodified (BUG:402263)
  • Fix the deform tool leaving darkish traces when scrubbing a lot on a 16 bit canvas (BUG:290383)
  • Updated Qt to 5.12.4

Warning: on some Windows systems, we see that Krita 4.2.x doesn’t start. We haven’t found a system where we could reproduce this issue, and it seems it mostly has to do with those systems not having a working OpenGL or Direct3D driver. We’re working on a solution.

Download Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

OSX

Note: the gmic-qt is not available on OSX.

Source code md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

GSoC Update

Thursday 27th of June 2019 12:00:00 AM

Last post I said that I was having some problems pushing my modifications to the git repo. I discovered that the official ROCS repository was recently moved to KDE's GitLab instance, called KDE Invent, where I am working on a fork of the original ROCS repo.

It is a model that I have some knowledge of, as I had already worked with GitLab in a past internship and made some merge requests because of Hacktoberfest (I like to win t-shirts). So I had to update the remote of my local repo, and I sent my updates to my remote fork branch, called improved-graph-ide-classes.

While modifying the code, I noticed some problems with the creation of random trees, and I am thinking about the best way to fix this part. The problem lies in the relation between the algorithm and the edge types available to generate the tree. When using directed edges, the code sometimes generates directed loops of size 2 in the graph.

Theoretically speaking, the definition of a tree requires undirected edges; otherwise it would be a Directed Acyclic Graph (DAG). There are different algorithms to generate each, with different uses. For example, to generate random trees for a given number of nodes, you could choose an algorithm that generates every tree on the given number of nodes with the same probability [paper] (that is, the algorithm has a uniform distribution over trees when a good seed is guaranteed; see the sketch below). As for DAGs, we can use a ranking algorithm to configure the height and width of the graph, which could be more useful in most cases. While a single algorithm could generate both, I think it is more useful to separate them.
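As an illustration of such a uniform generator (my own sketch, not ROCS code): decoding a random Prüfer sequence yields every labeled tree on n nodes with equal probability.

#include <random>
#include <utility>
#include <vector>

// Build a uniformly random labeled tree on n nodes by decoding a random
// Prüfer sequence; every labeled tree corresponds to exactly one sequence.
std::vector<std::pair<int, int>> randomTree(int n, std::mt19937 &rng)
{
    std::vector<std::pair<int, int>> edges; // undirected edges
    if (n < 2)
        return edges;

    std::uniform_int_distribution<int> dist(0, n - 1);
    std::vector<int> pruefer(n - 2);
    for (int &v : pruefer)
        v = dist(rng);

    // a node's degree is 1 plus its number of occurrences in the sequence
    std::vector<int> degree(n, 1);
    for (int v : pruefer)
        ++degree[v];

    // repeatedly connect the smallest remaining leaf to the next sequence element
    for (int v : pruefer) {
        for (int leaf = 0; leaf < n; ++leaf) {
            if (degree[leaf] == 1) {
                edges.emplace_back(leaf, v);
                --degree[leaf];
                --degree[v];
                break;
            }
        }
    }

    // the two nodes still having degree 1 form the last edge
    int u = -1;
    for (int i = 0; i < n; ++i) {
        if (degree[i] == 1) {
            if (u < 0)
                u = i;
            else
                edges.emplace_back(u, i);
        }
    }
    return edges;
}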

But then comes the problem: it is not guaranteed that an undirected/directed edge type will exist within the edge types of the program, as the user has the freedom to add and modify any edge type. I found two ways to solve it:

  • Always have an undirected and a directed edge in the edge types;
  • Put a check on the interface to check whether the edge is directed or undirected when necessary;

This decision boils down to whether or not to restrict the freedom of the user. Although this does not change much in the greater scope of the program, we have to decide whether the best way to go is to always force a check for the existence of a necessary edge type, or to define two edge types that always exist and work, restricting the modification of those edges.

Next post I will talk about the identifier part of the ROCS system, as it is a part of ROCS that needs some more polish.

Konsole and Splits

Thursday 27th of June 2019 12:00:00 AM

Some terminals like Tilix and Terminator offer the possibility to split the screen recursively, and I started to add the same thing to Konsole. Konsole is usually said to be the swiss army knife of terminal emulators, and if you haven't tried it yet, please do. We offer quite a lot of things that no other terminal emulator offers.

Right now this code is in review, but it already supports quite a few things:

  • Allow dragging & dropping a tab to create a new Konsole window
  • Allow dragging & dropping a tab back into another window
  • Allow dragging & dropping a split to reposition it in the current tab
  • Allow dragging & dropping a split to another tab
  • Allow dragging & dropping a split to another window (if in the same process)

Expect this to be in the next version of Konsole if all goes well. Help to test, help to find bugs. Help to test, help to find bugs.

Shubham (shubham)

Wednesday 26th of June 2019 05:46:11 PM
First month progress

Hello people!! I am here presenting you with my monthly GSoC project report. I will provide the links to my work at the end of the section.

A bit of background: It's been a great first month of Google Summer of Code for me. I was so excited that I started writing code a week before the actual coding period started. The first month, as I had expected, has been quite hectic, and to add to it, my semester end examinations are also running at the moment. So I had to manage my time efficiently, which I believe I have done well so far. Coming to the progress made during this period, I have done the following:

1.1 Implement PolkitQt1 Authorisation back-end: Here I aimed to implement the same Polkit back-end as the one currently implemented via KAuth. I had to replicate the same behaviour and just remove the mediator, i.e. KAuth, from in between.

1.2 Scrap Public Key Cryptography code based on QCA, as QDBus is secure enough: QDBus already provides enough security for the calls made by the application to the helper. Hence there is no need to encrypt and sign the requests of the application and verify their integrity on the helper side.

1.3 Establish QDBus communication from the helper towards the application: Previously the application-to-helper communication was done through a QDBus session, and helper-to-application communication was done via KAuth. In this task, I aimed to remove KAuth and establish QDBus communication here as well.

I have linked the patches for the above tasks below in the “Patches” section.

Links to my patches: If you are curious, you can check out the patches I have submitted on Phabricator:
1. PolkitQt1 Authorization backend
2. Scrap Public Key Cryptography code (PKC)
3. QDBus communication from helper towards Application

Note: Only the second patch (scrap PKC) is merged into master; the others are still work in progress.

Link to cgit repository: Curious minds may have a look at the code and maybe give suggestions/advice about it.
Till next time, bye bye!!

Shubham (shubham)

Wednesday 26th of June 2019 05:39:55 PM
What my project is all about: porting authentication to Polkit-qt-1

KDE Partition Manager runs all authentication and authorization protocols over KAuth (KDE Authentication), a tier 2 library from KDE Frameworks. In the current implementation of KDE Partition Manager, all privileged tasks, such as executing an external program like btrfs or sfdisk, or copying a block of data from one partition to another, which require escalated permissions, are executed by a non-GUI helper application. So, instead of running the whole GUI application (KDE Partition Manager) as root or superuser, a non-GUI helper application is spawned which runs as root and executes the privileged tasks. This helper program communicates with KDE Partition Manager over a simple DBus protocol. The current implementation may seem like a good idea, but it is not, the reason being that KAuth is an extra layer added over Polkit-qt which causes extra overhead. So, the proposal for this project is to port all the authentication/authorization code from KAuth to Polkit-qt without affecting the original behaviour of KDE Partition Manager.

Shubham (shubham)

Wednesday 26th of June 2019 05:38:21 PM
About me... huh, who am I?

I am Shubham, a 3rd year undergraduate student pursuing my B.E. (Bachelor of Engineering) at BMS Institute of Technology and Management, Bangalore, India. I am an amateur open source enthusiast and developer, mostly working with C++ and the Qt framework to build standalone applications. I also have decent knowledge of C, Java, Python, bash scripting and git, and I love developing under the Linux environment. I also practice competitive programming at various online judges. Apart from coding, in my spare time I go for cricket or volleyball to keep myself refreshed.
