Planet KDE

KSyntaxHighlighting - Over 300 Highlightings...

Sunday 25th of August 2019 12:06:00 PM

I worked yesterday again on the Perl script that creates the highlighting update site used by e.g. Qt Creator.

I thought it would perhaps be a good idea to also create a simple human-readable overview of all existing highlighting definitions.

The result is this auto-generated Syntax Highlightings page.

Astonishingly enough, at the moment the script counts 307 highlighting definitions. I wasn’t aware that we had already crossed the 300 mark.

Still, it seems people are missing some highlighting definitions; take a look at the bug list of KSyntaxHighlighting.

The bugs requesting new definitions are marked with [New Syntax].

I am actually not sure whether we should keep bugs for such requests at all if no patch is provided there to add the highlighting. Obviously, we want to have proper highlighting for everything people use.

But, as we have these bugs at the moment, if you feel you have the time to help us, take a look. Some Perl 6 highlighting would be appreciated, or any of the others there ;=) Or perhaps you have your own itch to scratch and can provide something completely different!

Our documentation provides hints on how to write a highlighting definition. Or just take a look at the existing XML files in our KSyntaxHighlighting repository.

Patches are welcome on the KDE Phabricator.

Otherwise, if you provide a new definition XML file plus one test case, you can attach them to the bug, too.

By test case, we mean an example file in the new language, under some liberal license. It will be used to store reference results of the highlighting, to avoid later regressions and to judge the quality of the highlighting and of later improvements.

We would prefer MIT-licensed new files, if they are not derived from older files that enforce a different license, thanks! (In that case, it would be good to mention in an XML comment which file was used as a base.)

KDE Usability & Productivity: Week 85

Sunday 25th of August 2019 02:12:46 AM

I’m not dead yet! KDE’s new goal proposals have been announced, and the voting has started. But in the meantime, the Usability & Productivity initiative continues, and we’re onto week 85! We’ve got some nice stuff, so have a look:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out how you can help and be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a tax-deductible donation to the KDE e.V. foundation.

Kate - Document Preview Plugin - Maintainer Wanted!

Saturday 24th of August 2019 04:46:00 PM

At the moment the Document Preview plugin, which e.g. allows previewing Markdown and other documents rendered by embedding a matching KPart, is no longer maintained.

You can find more information about why the plugin got abandoned in this Phabricator ticket.

If you want to step up and keep that plugin alive and kicking, now is your chance!

Even if you don’t want to maintain it, you can help out by taking care of existing bugs for this plugin.

Just head over to the KDE Bugzilla bugs for this plugin.

Any help with this is welcome!

Preparing for KDE Akademy 2019

Saturday 24th of August 2019 08:15:00 AM

Less than two weeks to go until Akademy 2019! Quite excited to go there again, for the 16th time in a row now. Until then there’s quite a few things I still need to finish.


I got three talks accepted this time, which is very nice of course, but it also implies quite some preparation work. The topics are rather diverse; all three cover aspects I ended up looking into for various reasons during the past year, and where I learned interesting or surprising things that seemed worthwhile to share.

Secure HTTP Usage - How hard can it be?

Day 1, 15:00 in U4-01, details.

This is the hardest one of the three talks to prepare, as I’m least familiar with the full depth of that subject. I did look into this topic during the past year as part of the KDE Privacy Goal, and I wasn’t too happy with what I found. So this talk will cover our current means of talking to remote services (QNetworkAccessManager, KIO, KTcpSocket, QSslSocket) and their issues (spoiler: there are more than you’d wish for), and how we could possibly move forward in this area.

To illustrate the problems, I’ve written a little demo application which can be found on KDE’s Gitlab, as well as the first patches to actually address the current shortcomings.

KPublicTransport - Real-time transport data in KDE Itinerary

Day 2, 14:00 in U4-08, details.

In last year’s presentation about KDE Itinerary I assumed that access to public transport real-time data would be a big blocker. I was wrong: after the talk I was contacted by people from the Navitia team, which led me to discover all the work the Open Transport community has already done in this field, one of the biggest (positive) surprises for me last year. So here I’m going to show what’s available there for us, and our interface to it, the KPublicTransport framework.

KDE Frameworks on Android

Day 2, 15:00 in U4-08, details.

While Android isn’t really my area of expertise either, I ended up digging into this as part of the work of bringing KDE Itinerary to that platform, and again not being entirely happy with the current state. Platform specific code in applications, even more so error-prone string-based JNI code, isn’t really something I like to maintain. Pushing more of that to the libraries that are supposed to isolate applications from this (Qt and KDE Frameworks) is the logical step here.

This one looks like it’s going to be the most challenging one to squeeze into its given time slot. I also couldn’t stop myself here from implementing a prototype to address some of the issues mentioned above.


Due to popular demand I’ll also be hosting a BoF about creating custom KItinerary extractors, currently planned for Thursday, 9:30 in U2-02. Since most people will have had to travel to Akademy, everyone should have sample data for train or plane tickets and for accommodation bookings that we can look at to get supported. Having a very recent version of KItinerary Workbench installed will be useful for this (as a result of preparing the BoF, it is receiving inline editing capabilities for the extractor scripts, and is getting close to being able to reload even the extractor metadata without requiring an application restart). My goal is to get more people contributing new or improved custom extractor scripts for booking data that is not yet supported.

I’ll also try to bring the latest version of the Plasma Mobile Yocto demo, in case we want to do an embedded device BoF again.


While we have been fixing a few Akademy-related KDE Itinerary bugs already, one thing that doesn’t work yet is automatically integrating the Akademy events into the itinerary (that’ll need the browser integration).

Until then, there is a manually created file that can be imported into KDE Itinerary here. At the moment this only contains the conference days, for the welcome and social event there are no locations available yet. Having the events in the itinerary will make the navigation features more useful, e.g. to find your way from your hotel to the event.

See you in Milan :)

I am going to Akademy

Friday 23rd of August 2019 10:12:28 PM

One more edition of KDE Akademy approaches, and here I am waiting for the day to pack my bags and embark on a 15-hour adventure from Brazil to Milan.

As always, I am excited to meet my friends and have some fun with everyone in the community, discussing our present and future.

And this year I am bringing a batch of 3D-printed KDE key holders.

So, if you want one or more of these, hurry because I will only have around 50 units.

See you all very soon!

That’s all folks!

Final Days of GSoC 2019

Friday 23rd of August 2019 08:53:00 PM
Hello friends! The final evaluation is coming, which brings this GSoC project to an end. I believe it is my duty to tell you all about my contributions to LabPlot during this project. I will try to make this post self-contained and cover every detail. Let's try to answer some general questions. If something is left out, please feel free to comment, and I will get back to you asap.
Which statistical tests got added?
This is the final list of statistical tests added during the course of the project:
  • T-Test
    • Two-Sample Independent
    • Two-Sample Paired
    • One-Sample
  • Z-Test
    • Two-Sample Independent
  • One-Way ANOVA
  • Two-Way ANOVA
  • Levene's Test: checks the assumption of homogeneity of variance between populations
  • Correlation Coefficient
    • Pearson's r
    • Kendall's Tau
    • Spearman Rank
  • Chi-Square Test for Independence
So, as many of you may have noticed, I have added almost all the features I promised in the proposal. For the one or two that are left (noted down as TODOs), all the basic structure is already created, and it will not be that difficult to complete them in future.
These features are tested using automated unit tests. How can these statistical tests be selected? You can choose a test using these beautiful docks.
Hypothesis Test Dock
Correlation Coefficient Dock
For T-Test, Z-Test and ANOVA you have to go to the Hypothesis Test Dock, and for Correlation Coefficient to the Correlation Coefficient Dock.
Here is the live demo. It will show you how to reach these docks and which tests are available.

What should the data source type be to run these tests? Currently, you can perform the tests on the data contained in a spreadsheet. LabPlot has its own spreadsheet.

You can either import data from a file (CSV and other formats) or from an SQL database directly into this spreadsheet.

Now, you just have to choose the columns on which the test is to be performed, along with the other options in the dock.

Note that only valid columns related to the tests will be shown.

There can be cases where you don't have access to the whole data, but you do have summary statistics such as the number of observations, the sample means and the sample standard deviations, or you have the data in the form of a contingency table. These cases mostly occur for the Z-Test (as the data is huge) or for the Chi-Square Test for Independence, hence this second alternative is currently available only for these two tests. For the other tests, the implementation can be extended in future.
So, the second alternative is the statistic table. In the Statistic Table you can fill in the data and then continue with the test.

Here are the Statistic Tables for Z-Test and Chi-Square Test for Independence.

Chi-Square Test for Independence Statistic Table

Z-Test Statistic Table

You can enter data in these empty cells.

For the Z-Test Statistic Table: the row header (vertical header) is editable, so you can give the rows your own names.

For the Chi-Square Statistic Table: both the row and column headers are editable. Moreover, you can change the number of rows and columns of this table dynamically from the dock:

Reducing the number of rows/columns erases the data in the removed rows/columns. Increasing it again creates new empty cells.

Using Export to Spreadsheet, you can export this contingency table to a spreadsheet. Three columns named Independent Var. 1, Independent Var. 2 and Dependent Var. will be appended to the spreadsheet. You can then save the spreadsheet, and next time simply open it and perform the test without having to fill in the contingency table.

You can clear all cells of the table using the Clear button at the end of the table. Note that the headers' contents are not erased by this.

What are these extra options in the docks? You can uncheck this checkbox to drop the assumption of equal variances of the two populations. Press the Levene's Test button to check for homogeneity of variance. Equality of variances is assumed by default. For the Two-Sample Independent T-Test you can choose whether to make this assumption, but for One-Way ANOVA this assumption must hold.

You can change the significance level here. The default value is 0.05.

Set the null and alternate hypothesis from here.
For Two-Sample Tests:
μ = population mean of Independent Var. 1
μₒ = population mean of Independent Var. 2
For One-Sample Tests:
μ = population mean of the independent variable
μₒ = assumed population mean

For One-Sample Tests, you can set the assumed population mean here.

This option is visible when the statistical test can treat variable 1 as categorical, so that variable 2 becomes the dependent variable. The checkbox is checked automatically when the column mode of Independent Variable 1 is Text. You can also check it manually if the column contains class labels in numeric form.
Using this you can finally perform a statistical test.

So finally! How do you perform statistical tests on the data? I will show you various live demos. For the first demo I will import data from a file, but for the later demos I will start directly from a pre-filled spreadsheet.
These examples are taken from various websites and open-source applications like JASP, etc.
Demo of the Two-Sample Independent T-Test. Here I will also show the use of the Independent Var. 1 Categorical checkbox.

Now, a demo example of calculating a Two-Sample Independent Z-Test using the Statistic Table:

This is a demo example of calculating a Two-Way ANOVA:

This is a demo example of calculating the Pearson Correlation Coefficient Test:

This demo shows how to calculate the chi-square test for independence when you have a contingency table, and how to convert that table into a spreadsheet.

What does the result look like? Hopefully you have watched the video demos by now. There you will have seen many result tables. I have also tried to explain what each element in the results view/table is.
The result view can be divided into three parts: the title, the summary view and the result view.

In the summary view you can see a summary of the data in the form of tables and text. It is a QTextEdit widget. The additional feature I added is the ability to show different tooltips for separate words. There is no direct method for such a feature, so I had to subclass QTextEdit as ToolTipTextEdit. The tables are actually HTML tables generated automatically by the code. Since QTextEdit is HTML-aware, using HTML tables gives the user a feature-rich experience.
The result view shows the final result of the statistical test. It also shows errors in red if something goes wrong while performing the test. Here, too, you can see tooltips, but here a tooltip is shown per line rather than per word.
Here are the screenshots of some results. 

What is left to do? 
  • Add more tooltips in Result and Summary
  • Check for assumptions using various tests (like Levene's Test).
  • Reimplement above features for data source type: Database.
  • Integrate various tests in one workbook to show a summary to the user in a few clicks.
  • All other minor TODOs are already written as comments in source code itself.
What are the future goals?
We aim to generate a single self-contained report for the data currently analysed by the user. This report will show the statistical analysis summary and graphs in one place, at a single click, without the user having to explicitly select or instruct anything unless they feel the need to do so. The idea is to make the task of data analysis easy for the user and to give them the freedom to play around with the data while keeping track of the changes in the different statistical parameters.
The Conclusion
So finally, you made it to the end of the post! Kudos to you.
This also brings me to the end of the GSoC project. It was a very pleasant journey. I learnt a lot during these 3-4 months. I wouldn't have gotten here without all the help and support of my mentors (Stefan Gerlach and Alexander Semke). They are the most chilled-out people I have worked with; they calmed me down and helped me a lot in difficult times, and never got frustrated while correcting my most stupid mistakes. A huge clap for both of them. 👏👏👏

Google Summer of Code 2019 may be ending but this is a start for me in this huge and friendly open source community. I will try to be as active here as possible and will not stop working on this project. 
There are many things left to be done, but I think the basic structure was already built during this project, and in future these features can be extended very nicely.
Thank you all for reading this till the very end. I will meet you all soon with new blog posts; till then, take care. Bubye... Alvida, Shabba Khair, Tschüss!

The Sprint

Friday 23rd of August 2019 06:17:25 PM

Hi :)) I haven’t posted for some time, because I was busy travelling and coding for the first half of the month. From Aug 5 to Aug 9, I went to the Krita Sprint in Deventer, Netherlands.

According to Boud, I was the first person to arrive. My flight transited via Hong Kong, where some flights were affected by natural and social factors, but fortunately mine was not one of them. Upon arrival in Amsterdam I got a ticket for the Intercity to Deventer. Railway construction work made me transfer via Utrecht Centraal, but that was not a problem at all: the station has escalators going both up to the hall and down to the platforms (in China you can only reach the hall by stairs or by elevator, which is often crowded after you get off). When I got out of Deventer station, Boud immediately recognized me (how?!). It was early in the morning, and the street’s quietness was broken by the sound of me dragging my suitcase. Boud led me through Deventer’s crooked streets and alleys to his house.

For the next two days people gradually arrived. I met my main mentor Dmitry (a magician!) and his tiger, Sagoskatt, which I (like many others) had mistaken for a giraffe; he even did the voice acting for Sago. He has a great deal of insight into the code base (according to Boud, “80%”) and has solved a number of bugs in Krita (though he says he introduced a lot of bugs too, ha!). I also met David Revoy (my favourite painter!), the author of Pepper and Carrot. And Tiar, our developer who started to work full-time on Krita this year; she has always volunteered to support other Krita users and is always on IRC and Reddit. And two of the other three GSoC students for the year: Blackbeard (whose face matches the name) and Hellozee. Sh_zam could not come and lost communications due to political issues, which was really unfortunate (at least now he can be reached again). It feels so good to be able to see so many people in the community, they are so nice! And it is such an experience to hack in a basement church.

On Aug 7 we went to the Open Air Museum. It shows a great deal of the history of the Netherlands and how its people lived. After a really delicious lunch we went out and started to paint. I was going to paint on my Surface using Krita, but unfortunately it ran out of battery, so I had to give up and painted on a postcard instead. The tram in the museum is my favourite (I am always fond of transit), and they even have a carhouse holding lots of old vehicles. Except for my head hitting the ceiling of the coach three times, everything that day was wonderful.

The next day was the main meeting. In the morning we discussed the development plans for Krita: bugs, stability, new features. David Revoy brought up the docker size problem again, which Boud simply called “a Qt problem.” He said, “Yes, I do know what to do with that, but new users probably don’t, and thus we gotta address it and not solely blame Qt.” (Yeah, it troubled me a lot as well!) Another thing closely related to me was building on Windows, which has been largely neglected by KDE. In the afternoon the focus shifted to marketing. I did not know much about it, but it is a fact that we cannot produce electricity out of love. We spent quite a lot of time on the painting competition for Krita: where it should be held, how to collect the paintings, how to filter out good pictures. Krita promotes new artists; they promote our software.

For the next two days people started leaving. I left on the 10th, and then slept for a whole day once I got to Nanjing (so tired…). On Aug 14 I left again for Toronto, and then got back to writing code and debugging. I finally found the time to write this post today, as I finally fixed a crash in my project. It is almost finished, and soon another post will follow about it.

Cantor and the support for Jupyter notebooks at the finish line

Friday 23rd of August 2019 09:13:49 AM
Hello everyone! It's been almost three weeks since my last post, and this is going to be my final post in this blog. So, I want to summarize all the work done in this GSoC project. Just to recap, the goal of the project was to add support for Jupyter notebooks to Cantor. This format is widely used in the scientific and education areas, mostly by the application Jupyter, and there is a lot of content available on the internet in this format (for example, here). By adding support for this format to Cantor, we allow Cantor users to access this content. This is a short description; if you are interested, you can find more details in my proposal.

In the previous post, I described the "maximum plan" of the Jupyter support in Cantor as being mostly finished. What this means in practice for Cantor is:
  • you can open Jupyter notebooks
  • you can modify Jupyter notebooks
  • you can save modified Jupyter notebooks without losing any information
  • you can save native Cantor worksheets in Jupyter notebook format
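For context on what the loader and saver have to handle: a Jupyter notebook is a JSON document following the nbformat specification. Roughly, a minimal nbformat 4 file looks like the sketch below (a hand-written illustration, not output produced by Cantor):

```json
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": ["# A heading\n", "Some text."]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {},
      "outputs": [
        {
          "output_type": "execute_result",
          "execution_count": 1,
          "data": { "text/plain": ["4"] },
          "metadata": {}
        }
      ],
      "source": ["2 + 2"]
    }
  ],
  "metadata": {
    "kernelspec": { "name": "python3", "display_name": "Python 3", "language": "python" }
  },
  "nbformat": 4,
  "nbformat_minor": 4
}
```

Saving without losing information means round-tripping every field here, including output types and metadata that Cantor itself does not interpret.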
To test the implemented code I used a couple of notebooks mentioned in the earlier post. But of course the Jupyter world doesn’t consist of only this small number of notebooks. So it was interesting to confront the code with more notebooks available out in the wild.

I recently discovered a nice repository of Jupyter notebooks about Biomechanics and Motor Control with 70 notebooks. I hadn’t used these notebooks for testing and validation before, and didn’t know anything about their content. 70 notebooks is quite a number, and my assumption was that these notebooks, without knowing them in detail, would cover many different parts and details of the Jupyter notebook format specification and would challenge my implementation to an extent that was not possible during my previous testing activities. So this new set of notebooks promised to be good test content for further, stricter validation of Cantor.

I was not disappointed. After the first round of manual testing based on this content I found issues in 7 notebooks (63 worked correctly!), which I addressed. Now Cantor handles all 70 notebooks from this repository correctly.

Looking back at what was achieved this summer, the following list summarizes the project:
  • the scope for mandatory features described in the project proposal was fully realized
  • the biggest part of optional features was finalized
  • some other new features were added to Cantor which were needed for the realization of the project, like new result types, support for embedded mathematical expressions and attachments in Markdown cells, etc.
  • the new implementation was tested and considered stable enough to be merged into master and we plan to release this with Cantor 19.12
  • new dedicated tests were written to cover the new code and to avoid regressions in future, the testing framework was extended to handle project load and save steps
I prepared some screenshots of Jupyter notebooks that show the final result in Cantor:

Even though the initial goal of the project was achieved, there are still some problems and limitations in the current implementation:
  • for Markdown entries containing text with images where certain alignment properties were set, or after image size manipulations, the visualization of the content is not always correct, which is potentially a bug in Qt
  • because of small differences in syntax between MathJax, used in Jupyter notebooks, and LaTeX, used for the actual rendering in Cantor, the rendering of embedded mathematical expressions is not always successful. At the moment Cantor shows an error message in such cases, but this message is often not very clear and helpful for the user
  • the Qt classes used by Cantor provide, without involving the full web engine, only limited and basic support for HTML. More complex cases like embedded YouTube videos and JavaScript don’t work at all.
This is all for the limitations, I think. Let's talk about future plans and perspectives. In my opinion, this project has reached its initial goals, is finished now and will only need maintenance and support in terms of bug fixing and adjustment to potential format changes in future.

Speaking more generally, this project is part of the current overall development activities in Cantor to improve the usability and stability of the application and to extend the feature set, in order to enable more workflows and reach a bigger audience. See the 19.08 and 18.12 release announcements to read more about the developments in the recent releases of Cantor. Support for the Jupyter notebook format is a big step in this direction, but it is not all. We already have many other items in our backlog going in this direction, like UX improvements and plot-integration improvements. Some of these items will be addressed soon. Some of them might be something for next year's GSoC project, maybe?

I think, that's all for now. Thank you for reading this blog and thank you for your interest in my project. Working on this project was a very interesting and pleasant period of my life. I am happy that I had this opportunity and was able to contribute to KDE and especially to Cantor with the support of my mentor Alexander Semke.
So, Bye.

KDE ISO Image Writer – Release Announcement

Thursday 22nd of August 2019 08:28:08 PM

My GSoC project comes to an end and I am going to conclude this series of articles by announcing the release of a beta version of KDE ISO Image Writer.

Highlights of the changes

User Interface Revamp

The user interface of KDE ISO Image Writer has been revamped according to designs made by the KDE community.

Windows Build

In addition to the user interface changes, code changes have been made to allow KDE ISO Image Writer to compile and run on Windows.

N.B. The Windows build currently crashes when ISO images of certain distributions are used because of a segmentation fault caused by QGpgME on Windows.

Making Sink(ed) contacts accessible to Plasma-Phonebook App

Thursday 22nd of August 2019 08:08:31 PM

Plasma-phonebook is a contacts-management application for Plasma Mobile. The app gets its data, i.e. the contacts, from the KPeople library, which acts as the backend for the plasma-phonebook app.
The task is to expose Sink contacts, i.e. contacts synced using the KDE Sink API, to KPeople. In other words, we need to make Sink’s contacts database a KPeople data source. Sink is an offline-caching, synchronization and indexing system for calendars, contacts, mails, etc.

** How contacts are synced using Sink will be discussed in another blog post. **

Here’s what happens: when the plasma-phonebook app is started, an instance of KPeople (the backend) is created, after which all of KPeople’s data sources are called upon to serve their master.
In this case, that is KPeopleSinkDataSource.

class KPeopleSinkDataSource : public KPeople::BasePersonsDataSource
{
public:
    KPeopleSinkDataSource(QObject *parent, const QVariantList &data);
    virtual ~KPeopleSinkDataSource();
    QString sourcePluginId() const override;
    KPeople::AllContactsMonitor *createAllContactsMonitor() override;
};

QString KPeopleSinkDataSource::sourcePluginId() const
{
    return QStringLiteral("sink");
}

AllContactsMonitor *KPeopleSinkDataSource::createAllContactsMonitor()
{
    return new KPeopleSink();
}

All the logic of the KPeople-Sink plugin lies in the class KPeopleSink, an instance of which is returned by createAllContactsMonitor. This class inherits from KPeople::AllContactsMonitor, which needs to be subclassed by each KPeople data source.

All that needs to be done is:
1. Fetch the list of addressbooks synced by Sink.
2. Get the resource id of each addressbook.
3. Fetch the list of contacts for that addressbook.
4. Assign a unique URI to every contact.
5. Create an object of a class inherited from AbstractContact. AbstractContact is the class through which backends provide the data of a given contact. Its virtual function customProperty, a generic method to access an arbitrary contact property, needs to be overridden in the subclass.
6. Emit the contactAdded signal.
Contacts are now added to KPeople and hence accessible to plasma-phonebook! Simple \o/

This is what code looks like:

void KPeopleSink::initialSinkContactstoKpeople()
{
    // fetch all the addressbooks synced by Sink
    const QList<Addressbook> sinkAdressbooks = Sink::Store::read<Addressbook>(Sink::Query());
    Q_FOREACH (const Addressbook &sinkAddressbook, sinkAdressbooks) {
        // get the resource id
        const QByteArray resourceId = sinkAddressbook.resourceInstanceIdentifier();
        // fetch all the contacts synced by Sink for this addressbook
        const QList<Contact> sinkContacts = Sink::Store::read<Contact>(Sink::Query().resourceFilter(resourceId));
        Q_FOREACH (const Contact &sinkContact, sinkContacts) {
            // get the uri
            const QString uri = getUri(sinkContact, resourceId);
            // remember the contact for this uri
            m_contactUriHash.insert(uri, sinkContact);
            KPeople::AbstractContact::Ptr contact(new SinkContact(sinkContact));
            Q_EMIT contactAdded(uri, contact);
        }
    }
}

Now, suppose the plasma-phonebook app is running and in the meanwhile Sink syncs an updated addressbook from the server. In that case, the next task is to make the changes visible in the phonebook. We set up a notifier for each resource id which notifies us every time a contact or addressbook with that resource id is updated.

m_notifier = new Notifier(resourceId);
m_notifier->registerHandler([=](const Sink::Notification &notification) {
    if (notification.type == Notification::Info
        && notification.code == SyncStatus::SyncSuccess) {
        // add program logic for the updated addressbook/contact
    }
});

A hash table can be maintained to keep track of newly added contacts. Fetch the list of contacts synced by Sink; whenever a contact in the list is not present in the hash table, a new contact has been added on the server, so we emit the contactAdded signal. The new contact is then added to KPeople and hence accessible in plasma-phonebook.

const QList<Contact> sinkContacts = Sink::Store::read<Contact>(Sink::Query().resourceFilter(resourceId));
Q_FOREACH (const Contact &sinkContact, sinkContacts) {
    const QString uri = getUri(sinkContact, resourceId);
    if (!m_contactUriHash.contains(uri)) {
        m_contactUriHash.insert(uri, sinkContact);
        KPeople::AbstractContact::Ptr contact(new SinkContact(sinkContact));
        Q_EMIT contactAdded(uri, contact);
    }
}

Similarly, we can code for cases where a contact is updated or deleted.

And with this, tada!
Our system for Nextcloud CardDAV integration on Plasma Mobile is here \o/

You can follow a similar procedure if you want to create your own contact data source for KPeople.

KPeople-Sink plugin’s code is available at:
Contributions to the project are welcome.

Day 88

Thursday 22nd of August 2019 03:00:42 AM

Today I’ll talk about my GSoC experience and won’t focus so much on Khipu; in the coming days I’ll publish a post about Khipu and what I’ve done.

As I said in earlier posts, the beginning was the most complicated part for me. I made a project plan thinking that I’d be able to complete it, and I started studying the code and the things I’d need many weeks before the start. But I couldn’t understand the code, and I think that was my fault. I even lost three weeks after the start stuck in this situation. It was hard for me, because I was really scared of failing while dealing with my college work at the same time: in Brazil our summer (and our summer vacation) runs from December to February; in July we have a three-week vacation, but GSoC lasts three months. I wasn’t having a good time at college either, but with the help of my mentors I found a way to handle both things, and everything went well.

After this complicated start, to avoid failing, my mentor suggested that I change my project. My initial project was to add new features to Khipu and Analitza (Khipu’s main library) to make it a better application and move it out of beta. My new project was to refactor Khipu (using C++ and QML). I was scared because I didn’t know if I’d be able to complete it, but the simplicity of QML helped me a lot, and before the first evaluation (approximately two weeks after I settled on the new project) I had finished the interface, or at least most of it.

At the start of the second period I slowed down my coding activities because I was at the end of my college semester and had to focus on my final exams. After that, I started to work on the backend and learned about models and connections between C++ and QML, and, most importantly, I improved my programming skills. I needed to learn to build a real program, not just to make small patches as I used to do in my open source contribution history. By the end of this period, the screen was already working, showing 2D and 3D spaces and plotting functions.

So here I am, at the end of the third and last period, writing about this experience. In this period I worked on the program’s buttons and options, created dialogs, improved the readability of my code, documented my functions and files, and did the technical work that I’ll explain in the next post.
This period has been nice, because I gained a lot of maturity, not only in coding but in my life. I feel able to fix the bugs that are still there and to deal with new situations in my professional and even my personal life.

The purpose of this post is to share my experience with the students who will participate in the coming years. So I’d like to give two pieces of advice based on my experience:
1- Don’t freak out; ask for help. The time you would lose being scared and nervous, use it to think rationally about what you can do to solve the problem.
2- Make a weekly activities schedule. It may sound obvious, but after I started doing it my productivity increased a lot, because it is easier to control how much time I’m working and to allocate time to my other activities.
And, of course, I’d like to say to KDE, Google and my mentors: thanks for this opportunity.

Second Evaluation

Wednesday 21st of August 2019 08:00:00 PM
I would say the second evaluation period (June 28, 2019 - July 22, 2019) was one of the most exciting periods of my whole GSoC project. I had passed the first evaluation, received payment, and gone partying with my brother. Things also got easier and I started enjoying coding in Qt.

During this period, I faced two major problems.

1) For the first evaluation I had not done any unit testing; I had only tested my features manually. So it was time to create unit tests for each feature. I started reading about Qt unit tests, analysed the source code of existing unit tests, and wrote my own for the previously implemented features. These unit tests were just not good enough, as I was getting a huge relative error of around 0.1. At the time my mentor pointed it out, but I couldn't properly understand the problem, so we left it for a while and started on new features. After completing the new features (almost everything intended according to the proposal), I started creating unit tests for them as well. My relative error was huge again, but this time I didn't leave it for later. I discussed it with my mentor again, and then we caught the issue: I was not using enough precision in the reference answers against which the computed values were compared. I recalculated all the answers using a calculator and placed them in the code. Now I am getting an accuracy of around 1.e-5, which I think is decent enough. I have only done this recalculation for the newly implemented features; the values for the features implemented for the first evaluation still remain to be recalculated. This is explained in my report for the second evaluation.
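
The precision issue can be illustrated with a small relative-error check along these lines (a sketch only; the actual LabPlot tests use the Qt Test framework, and this helper name is made up):

```cpp
#include <cmath>

// Hypothetical helper: does `computed` match the reference value `expected`
// to within a relative tolerance? With reference values rounded to only a
// couple of digits, the relative error stays around 0.1; with full-precision
// references, a tolerance of 1.e-5 becomes achievable.
bool nearlyEqual(double computed, double expected, double relTol = 1.e-5)
{
    if (expected == 0.0)
        return std::fabs(computed) <= relTol;
    return std::fabs(computed - expected) / std::fabs(expected) <= relTol;
}
```

A test then asserts nearlyEqual(statistic, reference) for each implemented feature.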

2) The second problem was the size of the source files. My portion of the code had grown large, and introducing new features would increase the size of a source file by a huge amount. It is not good practice to put everything into one file, and we also didn't want to repeat and rewrite the same portions of code everywhere. So we used inheritance: I created a base class (GeneralTest) for all the classes created so far and subclassed HypothesisTest and CorrelationCoefficient from it. This is also explained in my report for the second evaluation.

For more technical details of what all is done in this period, you can refer to my report:

Thank you so much for your interest and for reading so patiently. Bye, take care... see you soon.

First Evaluation

Wednesday 21st of August 2019 07:21:00 PM
After the Bonding Period, the official coding period started. I was also introducing a feature into a piece of software for the first time; until then, I had only changed portions of code or added new ones to existing files. Now I had to add new source files. I started looking at the source code more deeply and tracing source files back to their roots.

I became overwhelmed seeing so many files interlinked so deeply with each other. For the first time I was working on such a big project, and now I had to contribute something real. So many lines of code confused me a lot. I wanted to get to the root of the code so that I could create my own branches in it, but it was not that easy for me. Moreover, until that time I wasn't aware of the model/view architecture and the signal-slot mechanism, which are the core of Qt programming.

I started seeking help. One of the professors at my institute calmed me down and advised me to start with the main.cpp file in the source code. In all the hassle, I had completely forgotten that C++ programs have their roots in the main.cpp file. I started looking at the main file, and though I understood very little from it, I felt quite relaxed. Since I had worked on the pivot table with my mentor Alexander Semke during the bonding period, I focused only on the pivot table and tracked down all the code written for it and how it is executed. After a few days the picture became somewhat clearer, and I got a rough idea of how to introduce a new feature into the spreadsheet. But the problems weren't over yet: another major issue was my very limited knowledge of Qt. I contacted my mentors about this and they advised me to go through the Qt documentation thoroughly. As I already told you in my first posts, the Qt documentation is very elaborate, with examples and tutorials; it saved my day, and in a day or two I was completely ready to add my own new features. So it took me almost a week to get to the point where I could write my own source code.

At the start, I wrote code in my own coding style. But when I submitted my work for review, my mentors informed me about the project's coding style and also gave me various fixes. It took me a little time to adopt the style, but everything afterwards became much easier.

Here is my documentation for the First Evaluation:

Thank you very much for your patience and your interest in my project. Will meet you soon. Till then bye... and take care...

Bonding Period

Wednesday 21st of August 2019 06:36:00 PM
So my project got accepted and I entered what is called the Community Bonding Period. This period is included in the GSoC timeline to let the accepted students get to know their community and mentors better.
During this period, we worked on adding small options to matrices (matrices can be designed in LabPlot). These options already existed in the spreadsheet, and I had to more or less clone them for matrices. I also worked on the pivot table feature with Alexander Semke; actually, he created the whole base and I only worked on showing hierarchical headers in it. In parallel, I started looking at other software, such as SOFA and JASP, to learn which features are implemented there, how they are implemented, and what is missing from them. This helped me a lot in figuring out what should be done in the project and what the end result should be.

Project Proposal

Wednesday 21st of August 2019 04:47:00 PM
I started working on junior bugs with the help of my mentors. After my mentor gained confidence in me and I got a little overview of the project, he asked me to start thinking about the project and to write a proposal.

So we discussed what was to be done during the three months and which features needed to be included. Our aim was to add statistically meaningful features to LabPlot. At that time LabPlot supported regressions, histogram plots, and measures of location, dispersion, and shape for statistical analysis, but these features were not sufficient. They fail to answer some broad questions such as: Is there a statistically significant difference between two or more groups? Or: What is the strength of the relationship between two variables, and is that strength statistically significant enough to say that the two variables are correlated?
The idea was to add more statistically relevant measures that can answer the above-posed questions. Along with adding these features, we planned on creating a statistical report of some sort that would be self-sufficient, showing all important graphs and a summary with a single click. We also decided to add tips to each value output by the tests, so that a user who doesn't have much knowledge of statistical analysis can also perform tests, while a user who is just interested in the numbers gets a distraction-free view.

You can view my full proposal here:

Thanks for your patience... :) 

How I got a project in Labplot KDE

Wednesday 21st of August 2019 04:11:00 PM
When the organisation names were revealed, I started digging into the projects to find out which ones interested me most. Many projects were quite interesting, but my restriction was the coding language. Still, I kept a note of all the projects that interested me, with a short note on what interested me about each and what skills it required. After completing the list, I narrowed my focus from a huge number of projects to relatively few (almost 50). Originally, I planned to create a ranking of some sort and then work through them individually from the top, but fortunately I didn't have to do that. When I was taking a quick look at the list, I noticed LabPlot and it seemed a perfect project for me. I had used the KDE desktop before and I knew how big the KDE community is. I had already done some work on data analysis (in a course and during my summer internship last year), and the language requirements were C++ and Qt. Though Qt (and GUI programming more broadly) was new to me, looking at the basic documentation I realised it would not be too difficult, as it is like C++ with extended features; also, the Qt documentation is quite elaborate.
Considering all this, I decided to try my best for this project, and I mailed the community about my interest in the project and why I thought I would be able to complete it. I also told them that this would be my first contribution to open source and that I was looking forward to contributing.
I wasn't expecting a reply, as there was too much of a rush; I knew the mentors would be getting a lot of mail from many students. But to my surprise, I got a reply from both of my mentors (Stefan Gerlach and Alexander Semke) within a day. They welcomed me warmly to the community and also offered to help me solve junior bugs, so that I could adopt LabPlot's workflow and get a glimpse of the open-source community.

LabPlot's Welcome screen and Dataset feature in the finish line

Wednesday 21st of August 2019 03:12:20 PM
Hello everyone! This year's GSoC is coming to an end, so I think I should let you know what's been done since my last blog post. I would also like to evaluate the progress I managed to make against the goals set at the beginning of this project.
As I told you in my last post, my main goal in this last period was to clean up, properly document, refactor, and optimise the code and make it easier to read, so it would be fit to be merged into the master branch and used by the community.
My next task was to search for bugs and fix them, in order to make the implemented features more or less flawless. I can happily state that I succeeded in this.
As proposed, I implemented unit tests for the main dataset-related features. This proved useful, since it helped discover some hidden problems, all of which were corrected. The main features that tests were developed for are:
  • processing the metadata files in order to list the available collections, categories, subcategories and datasets
  • downloading and processing of datasets based on the information stored in the metadata files
  • adding new datasets to the already existent ones (by creating new collections, categories and subcategories)
I managed to create some "real" example projects so users can explore the possibilities provided by LabPlot. These include the three already existing ones, and I added some dataset-related ones. I also continued collecting datasets and managed to double the previous amount: we now have 2000 categorized datasets, which is already a considerable number.
As I said, the main priority was to improve and correct the code, not to develop major new features. However, some minor new ideas were still implemented. The first step was to improve the downloading and naming of the datasets. I also developed a caching mechanism for the downloaded files: when downloading a file, if an earlier copy is available, we check whether it was created more than one day ago. If it was, the file is deleted and the download proceeds; otherwise the download isn't necessary.
Another, more visible, feature is that users can now easily display the full name and the description of the selected dataset, so they can retrieve additional information before choosing a dataset to work with. Previously, users could only see the short name of the dataset, which wasn't very practical, since it didn't offer much information about the dataset itself. I also added a counter to every collection, category and subcategory, so users can now see how many datasets belong to each of these units of categorization; I thought it might be useful. The following demo video presents these new features in use:
 Show Details functionality of ImportDatasetWidget
I also created a backup functionality for ImportDatasetWidget. When the user presses the "Refresh" button, the metadata files shipped with LabPlot are copied to the dataset folder, and the files that were previously there are kept as a backup. This was needed because pressing Refresh makes any newly added dataset disappear, which can be very unpleasant if done unintentionally; I speak from experience. By pressing the "Restore" button, the effect of the "Refresh" button can be undone: the current metadata files are deleted, the backup files are restored, and the widget is updated accordingly. The following video illustrates this functionality:
 Restore backup functionality of ImportDatasetWidget
Having presented the improvements and new features, I think it's high time I gave you an overall picture of what was accomplished during this summer's GSoC. I will present the new features in two groups, dataset-related and welcome-screen-related, and provide a demo video for each.
I'd like to start with the welcome screen, since that will be the first thing people see when they start using LabPlot. I think the welcome screen turned out to be quite good looking, clean, and simple (in the best possible sense). It is quite responsive, and by allowing users to resize the different sections it is customisable to a degree. I think it provides a great deal of functionality that will be useful to the great majority of users, and it also helps in the process of getting to know LabPlot. In the next demo video I present every aspect of LabPlot's welcome screen:
 LabPlot's welcome screen
The other main part of the project was implementing functionality to import online datasets. This could be useful for those who like working with data or in the field of statistics. These datasets can be visualized by creating different plots: curves and histograms. This way LabPlot may be used for educational purposes as well. LabPlot comes with a great number of datasets, which can be expanded individually by adding your own. I really hope users will find this new functionality useful and practical. Here is a video that demonstrates adding a new dataset and creating some plots based on it:
 LabPlot's Dataset Functionality
And finally, here comes the evaluation/summary. I truly think that every feature presented in my proposal is implemented and working, so the main aim is met: LabPlot now has full support for downloading and processing online datasets and has a gorgeous welcome screen. There were difficulties along the way, but with work, and with help from my mentors, everything was dealt with. As I said, everything works, but some unforeseen bugs or errors might appear in the future. Possible next steps are to improve the overall performance of the new features and the overall look of the welcome screen.
Working on this project for the last three months has been a great experience, and I can honestly say that I improved my coding skills and way of thinking in a way I hadn't even dreamt of. I also improved my ability to work alone and solve problems on my own. I'm more than grateful to my mentor, Kristóf, who always helped me and had my back whenever I encountered any hardship. I'd also like to express my gratitude towards Alexander Semke, my mentor's former mentor and an invaluable member of the LabPlot team, who also helped me a great deal. I am determined to keep improving these features even after GSoC has ended. I would be more than happy to stay in this amazing team and help them whenever they need me. It's my goal to keep working on LabPlot with them in the future, since I really liked being part of this team. I truly think that people should contribute more to open source, and the end of GSoC shouldn't mean the end of the involvement.
This is it, guys. Thank you for reading the post, and thank you for your interest in my project. If there's any development regarding my project or a new task (in the future) I'll let you know here. Bye :)

My Introduction to GSoC

Wednesday 21st of August 2019 02:50:00 PM
Last year, during my summer vacation (May 2018 to July 2018), I did my summer internship at the Indian Institute of Technology Delhi. I was working with some wonderful students and researchers on a project, and we soon got to know each other well. I was the most junior student there, so I got to learn a lot from them.
At that time, one of my friends there (who is also my senior) told me about Google Summer of Code. She told me how many of her friends had benefited from it and how their technical and communication skills had increased drastically. I became curious and searched about it a little on the internet, but I got busy with the project and college assignments and it slipped my mind. In February (I think it was 21-22 February 2019), while exploring various internship options, the idea of applying to GSoC suddenly struck me again. I visited the official website and luckily found that the GSoC process had not yet started; in two or three days they were going to announce that year's accepted organisations. I started digging deeper, reading about past students' experiences, and began the process of finding the best project for me, one I would be comfortable doing while learning new things at the same time.

Welcome Note

Wednesday 21st of August 2019 02:36:00 PM
Hello Visitors! Thank you for your interest in my blog. Through this blog, I will try my best to share and document my experiences and work done during the Google Summer of Code.

A little intro about me: I am now in the fourth year of my Bachelor's degree in Electrical Engineering at the Indian Institute of Technology Dharwad. I have been doing competitive coding for the past three years, and I love to automate things and to write faster code with lower big-O complexity.

Distributed Beta Testing Platforms

Wednesday 21st of August 2019 10:10:12 AM

Do they exist? Especially as free software? I don’t actually know, but I’ve never seen a free software project use something like what I’ve got in mind.

That would be: a website where we could add any number of test scenarios.

People who wanted to help would get an account and make a profile listing their hardware and OS. Then, a couple of weeks before we make a release, we'd publish a beta; the beta testers would log in and be randomly assigned one of the test scenarios to test and report on. We'd match the tests to OS and hardware, and for some tests probably try to get the test executed by multiple testers.

Frequent participation would lead to badges or something playful like that; testers would be able to browse tests, add comments, and interact, and we, as developers, would get feedback: so many tests executed, so many reported failures or regressions, and we'd be able to improve before the release.

It would be a crowd-sourced alternative to something like Squish (which I've never found very useful, not for Krita, nor at the companies where it was used); it would not just make beta testers feel useful, it would make beta testing useful. Of course, maintaining the test scripts would also take time.

It sounds simple to me, and kind of logical and useful, but given that nobody is doing this — does such a thing exist?
