Planet KDE

KMyMoney 5.0.6 released

Monday 19th of August 2019 07:48:58 PM

The KMyMoney development team today announces the immediate availability of version 5.0.6 of its open source Personal Finance Manager.

Another maintenance release is ready: KMyMoney 5.0.6 comes with some important bugfixes. As usual, problems have been reported by our users and the development team fixed some of them in the meantime. The result of this effort is the brand new KMyMoney 5.0.6 release.

Despite even more testing we understand that some bugs may have slipped past our best efforts. If you find one of them, please forgive us, and be sure to report it, either to the mailing list or on bugs.kde.org.

From here, we will continue to fix reported bugs, and work to add many requested additions and enhancements, as well as to further improve performance.

Please feel free to visit our overview page of the CI builds at https://kmymoney.org/build.php and maybe try out the latest and greatest by using a daily AppImage built from the stable branch.

The details

Here is the list of the bugs which have been fixed. A list of all changes between v5.0.5 and v5.0.6 can be found in the ChangeLog.

  • 408361 Hardly distinguishable line colors in reports
  • 410091 Open sqlite database under kmymoney versions >= 5.0
  • 410865 Access to german online banking requires product key
  • 411030 Attempt to move one split to new category moves all splits with same category

Here is the list of the enhancements which have been added:

  • The default price precision for the Indonesian Rupiah in new files has been raised to 10 decimals.

Kontact and Google Integration Issues

Monday 19th of August 2019 06:53:00 PM

Lately there have been some issues with the Google integration in Kontact: it is no longer possible to add a new Google Calendar or Gmail account in Kontact because the login process fails. This is due to an oversight on our side which led to Google blocking Kontact, as it did not comply with Google’s policies. We are working on resolving the situation, but it will take a little while.

Existing users should not be affected by this - if you already had Google Calendar or Gmail set up in Kontact, the sync should continue to work. It is only new accounts that cannot be created.

In the case of Gmail, the problem can mostly be worked around when setting up the IMAP account in KMail by selecting the PLAIN authentication method1 in the Advanced tab and using your email and password. You may need to enable Less Secure Applications in your Google account settings in order to be able to log in with a regular email address and password.

If you are interested in the technical background of this issue, the problem comes from Google’s OAuth App Verification process. When a developer wants to connect their app to a Google service, they have to select which particular services their app needs access to, and sometimes even which data within each service they want to access. Google will then verify that the app is not trying to access any other data and that it is not misusing the data it has access to - this serves to protect Google users, as they might sometimes approve apps that access their calendars or emails with malicious intent without realizing it.

When I registered Kontact I forgot to list some of the data points that Kontact needs access to. Google has noticed this after a while and asked us to clarify the missing bits. Unfortunately I wasn’t able to react within the given time limit and so Google has preemptively blocked login for all new users.

I’m working on clarifying the missing bits and having Google review the new information, so hopefully the Google login will start working again soon.

  1. Despite its name, the PLAIN authentication method does not weaken the security. Your email and password are still sent safely encrypted over the internet. 

Qt Visual Studio Tools 2.4 RC Released

Monday 19th of August 2019 05:17:08 PM

We have released Qt Visual Studio Tools 2.4 RC (version 2.4.0); the installation package is available in the Qt download page. This version features an improved integration of Qt tools with the Visual Studio project system, addressing some limitations of the current integration methods, most notably the inability to have different Qt settings for each project configuration, and lack of support for importing those settings from shared property sheets.

Using Qt with the Visual Studio Project System

The Visual Studio Project System is widely used as the build system of choice for C++ projects in VS. Under the hood, MSBuild provides the project file format and build framework. The Qt VS Tools make use of the extensibility of MSBuild to provide design-time and build-time integration of Qt in VS projects — toward the end of the post we have a closer look at how that integration works and what changed in the new release.

Up to this point, the Qt VS Tools extension managed its own project settings in an isolated manner. This approach prevented the integration of Qt in Visual Studio from fully benefiting from the features of VS projects and MSBuild. Significantly, it was not possible to have Qt settings vary according to the build configuration (e.g. having a different list of selected Qt modules for different configurations), including Qt itself: only one version/build of Qt could be selected and would apply to all configurations, a significant drawback in the case of multi-platform projects.

Another important limitation that users of the Qt VS Tools have reported is the lack of support for importing Qt-related settings from shared property sheet files. This feature allows settings in VS projects to be shared within a team or organization, thus providing a single source for that information. Up to now, this was not possible to do with settings managed by the Qt VS Tools.

To overcome these and other related limitations, all Qt settings — such as the version of Qt, which modules are to be used or the path to the generated sources — will now be stored as fully fledged project properties. The current Qt Settings dialog will be removed and replaced by a Qt Settings property page. It will thus be possible to set the values of all Qt settings according to configuration, as well as import those values from property sheet files.

 

A closer look

An oversimplified primer might describe MSBuild as follows:

  • An MSBuild project consists of references to source files and descriptions of actions to take in order to process those source files — these descriptions are called targets.
  • The build process runs in the context of a project configuration (e.g. Debug, Release, etc.). A project may contain any number of configurations.
  • Data associated with source files and the project itself is accessible through properties. MSBuild properties are name-value definitions, specified per configuration (i.e. each configuration has its own set of property definitions).

Properties may apply to the project itself or to a specific file in the project, and can be defined globally or locally:

  • Project scope properties are always global (e.g. the project’s output directory or target file name).
  • Properties applying to source files can be defined globally, in which case the same value will apply to all files (e.g. default compiler warning level is defined globally at level 3).
  • Such a global, file-scope definition may be overridden for a specific file by a locally defined property with the same name (e.g. one of the source files needs to be compiled with warning level 4).
  • Global definitions are stored in the project file or imported from property sheet files.
  • Local property definitions are stored in the project file, within the associated source file references.

The Qt Visual Studio Tools extension integrates with the MSBuild project system by providing a set of Qt-specific targets that describe how to process files (e.g. a moc header) with the appropriate Qt tools.

The current integration has some limitations, with respect to the capabilities of the MSBuild project system:

  • User-managed Qt build settings are copied to project properties on change. Given this one-way synchronization, project properties may become out-of-sync with the corresponding Qt settings.
  • The value of the Qt build settings is the same for all configurations, e.g. the same Qt build and modules will be used, regardless of the selected configuration.
  • It is not possible to override properties in generated files like the meta-object source code output of moc.
  • Qt settings can only be stored in the project file. As such, it is not possible to import Qt definitions from shared property sheets, e.g. a common Qt build shared across several projects.

As discussed above, the solution for these limitations has been to make Qt settings fully fledged project properties. In this way, Qt settings will be guaranteed in-sync with all other properties in the project, and have the possibility of being defined differently for each build configuration. It will be possible to import Qt settings from property sheets, and the property page of Qt tools that generate C++ code, like moc, will now allow overriding compiler properties in generated files.


Interview with Chayse Goodall

Monday 19th of August 2019 08:00:17 AM
Could you tell us something about yourself?

Hi, my name is Chayse Goodall. I am 14 years old. I just draw for fun!

Do you paint professionally, as a hobby artist, or both?

I do animate and draw for a hobby. My artwork improves little by little, but I don’t think it’ll be good enough for a profession!

What genre(s) do you work in?

I don’t know what my style is, but people tell me that I have an anime style.

Whose work inspires you most — who are your role models as an artist?

I have many inspirers, two of whom were my art teachers, and one of whom was my math tutor. They all motivate me to keep going with my progress.

How and when did you get to try digital painting for the first time?

I tried digital art in 2015. I only had a phone at the time, so it was hard at first. It’s a million times better now, and I have a touch screen computer, so I’ve been trying different art programs!

What makes you choose digital over traditional painting?

There are more tools, and there is an undo button!

How did you find out about Krita?

I was looking up free art programs, and stumbled across this program.

What was your first impression?

My experience was great! It was easy to follow once you get the hang of it!

What do you love about Krita?

All the brushes and how easy it is to save.

What do you think needs improvement in Krita? Is there anything that really annoys you?

It is a bit confusing and it looks complex to newcomers.

What sets Krita apart from the other tools that you use?

It has amazing features and it doesn’t require me to “sign up” or pay for anything!

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I only made one picture. She doesn’t have a name, she was just a test character to see how good the program was. (It came out great!)

What techniques and brushes did you use in it?

I normally draw the sketch first in a dark red color. Then I draw the plain body in a light green. I sketch the clothes, hair, and accessories on in a neon color.

I just use the pen for coloring and shading.

Where can people see more of your work?

My instagram- panstables
My youtube- Nubbins

KDE sprints in summer heat

Sunday 18th of August 2019 08:00:00 PM

At the end of June I attended the annual Plasma sprint, which was held this year in Valencia in conjunction with the Usability sprint. And in July we organised, on short notice, a KWin sprint in Nuremberg, directly following the KDE Connect sprint. Let me talk you through some of the highlights and what I concentrated on at these sprints.

Plasma sprint in Valencia

It was great to see many new faces at the Plasma sprint. Most of these new contributors were working on the Plasma and KDE Apps UI and UX, and we definitely need some new blood in these areas. KDE's Visual Design Group, the VDG, thinned out over the last two years because some leading figures left. But now, seeing new talented and motivated people joining as designers and UX experts, I am optimistic that there will be a revival of the golden time of the VDG that brought us Breeze and Plasma 5.

With regard to technical topics, there is always a wide field of different challenges and technologies to combine at a Plasma sprint. From my side I wanted to discuss current topics in KWin, but of course not everyone at the sprint works directly on KWin, and some topics require deeper technical knowledge about it. Still, there were some fruitful discussions, in particular with David, who was the second KWin core contributor present besides me.

My work on dma-buf support in KWin and KWayland can be counted as a direct product of the sprint. I started work on that at the sprint mostly because it was a feature requested for quite a long time by Plasma Mobile developers, who need it on some of their devices to get them to work. But in general this should also improve performance and energy consumption in our Wayland session on many devices. As always, such larger features need time, so I was not able to finish them at the sprint. But last week I landed them.

Megasprint in Nuremberg

At the Plasma sprint we talked about the current state of KWin and what our future goals should be. I wanted to talk about this some more, but the KWin core team was sadly not complete at the Plasma sprint. It was Eike's idea to organize a separate sprint just for KWin, and I took the next best opportunity to do this: as part of the KDE Connect and the Onboarding sprints in the SUSE offices in Nuremberg just a few weeks later. Because of the size of the three combined sprints, we jokingly called the whole endeavor the Megasprint.

KDE Connect sprint

I was there one or two days earlier to also attend the KDE Connect sprint. This was a good idea because the KDE Connect team needs us to provide some additional functionality in our Wayland session.

The first feature they rely on is a clipboard management protocol to read out and manipulate the clipboard via connected devices. This is something we want to have in our Wayland session in general anyway, because without it we cannot provide a clipboard history in the Plasma applet. And a clipboard selection would be lost as soon as the client providing it is closed. This can be intentional, but in most cases you expect at least simple text fragments to still be available after the source client quits.

The second feature is fake input of keyboard and mouse events via other KDE Connect linked devices. In particular, fake input via keyboard is tricky. My approach would be to implement the protocols developed by Purism for virtual keyboards and input methods. Implementation of these looks straightforward at first; the tricky part comes in when we look at the current internal keyboard input code in KWayland and KWin: there is not yet support for multiple seats, or for multiple keyboards on one seat at the same time. But this is a prerequisite for virtual keyboards if we want to do it right, including support for different layouts on different keyboards.

KWin sprint

After the official start of the KWin sprint we went through a long list of topics. As this was the first KWin sprint in years, or maybe ever, there was a lot to talk about, from small code style issues we needed to agree on to large long-term goals for where our development efforts should concentrate in the future. We also discussed current topics, and one of the bigger ones is certainly my compositing rework.

But in the overall picture this again is only one of several areas we need to put work into. In general it can be said that KWin is a great piece of software with many great features and good performance, but its foundations have become old and in some cases rotten over time. Fix after fix has been put in, one after the other, increasing the complexity and decreasing the overall cohesion. This is normal for actively used software and nothing to criticize, but I think we are now at a point in KWin's product life cycle where we either phase it out or put in the hours to rework many areas from the ground up.

I want to put in the hours, but on the other hand, in light of possible regressions from such large changes, the question arises whether this should be done decoupled from normal KWin releases. No decision has been taken on that yet.

Upcoming conferences

While the season of sprints for this year is over now, there are some important conferences I will attend, and if you can manage it I invite you to join these as well. No entry fee! In the second week of September the KDE Akademy is held in Milan, Italy. And in the first week of October the X.Org Developer's Conference (XDC) is held in Montreal, Canada. At XDC I have two talks lined up myself: a full-length talk about KWin and a lightning talk about a work-in-progress solution of mine for multi-DPI scaling in XWayland. And if there is time I would like to hold a third one about my ongoing work on auto-list compositing.

In the beginning I planned to travel to Canada only for XDC, but just one week later WineConf 2019 is held close to Montreal, in Toronto, so I might prolong the stay a bit to see how, or if at all, I as a compositor developer could help the Wine community in achieving their goals. To my knowledge this would be the first time a KWin developer attends WineConf.

KDE Usability & Productivity: Week 84

Sunday 18th of August 2019 06:01:25 AM

Get ready for week 84 in KDE’s Usability & Productivity initiative! 84 weeks is a lot of weeks, and in fact the end is in sight for the U&P initiative. I’d say it’s been a huge success, but all good things must come to an end to make room for new growth! In fact, KDE community members have submitted many new goals, which the community will be able to vote on soon, with the three winners being unveiled at Akademy next month.

But fear not, for the spirit of the Usability & Productivity initiative has suffused the KDE community, and I expect a lot of really cool U&P related stuff to happen even after the initiative has formally ended–including the long-awaited projects of PolicyKit support and mounted Samba and NFS shares in KIO and Dolphin! These projects are making steady progress and I hope to have them done in the next few months, plugging some longstanding holes in our software.

And finally, I’ll be continuing these weekly blog posts for the foreseeable future! It’s just too much fun to stop. Anyway, here’s what we’ve got this week:

  • New Features
  • Bugfixes & Performance Improvements
  • User Interface Improvements

If you’re getting the sense that KDE’s momentum is accelerating, you’re right. More and more new people are appearing all the time, and I am constantly blown away by their passion and technical abilities. We are truly blessed by… you! This couldn’t happen without the KDE community–both our contributors for making this warp factor 9 level of progress possible, and our users for providing feedback, encouragement, and being the very reason for the project to exist. And of course, the overlap between the two allows for good channels of communication to make sure we’re on the right track.

Many of those users will go on to become contributors, just like I did once. In fact, next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a tax-deductible donation to the KDE e.V. foundation.

The Titler Revamp: QML Producer in the making

Sunday 18th of August 2019 04:19:38 AM

The previous month’s been a ride!

At the beginning of this month, I started testing out the new producer as I had a good, rough structure for the producer code and was only facing a few minor problems. Initially, I was unclear about how exactly the producer is going to be used by the titler, so I took a small step back and spent some time figuring out how kdenlivetitle, the producer currently in use, works.

Initially, I faced integration problems (which are the ones you’d normally expect) when I tried to make use of the QmlRenderer library for rendering and loading QML templates – and most of them were resolved by a simple refactoring of the QmlRenderer library source code. To give an example, the producer traditionally stores the QML template in global variables, taken as a character pointer argument (which is, again, traditional C). The QmlRenderer lib takes a QUrl as its parameter for loading the QML file, so to solve this problem all I had to do was overload the loadQml() method with one that could accommodate the producer’s needs – which worked perfectly fine. As a consequence, I also had to compartmentalise the rendering process further, so now we have 3 methods which are called sequentially when we want to render something using the library: initialiseRenderParams( ) -> prepareRenderer( ) -> renderQml( ).

(Code: https://github.com/akhilam512/mlt)

At around the beginning of the 2nd week, I resumed testing the producer code. I used melt for this purpose:

melt qml:~/path/to/test.qml

and now I was faced with a blocker that would keep me busy for a little more than a week – a SEGFAULT.

The SIGSEGV signal came from QOpenGLContext::create( ) – the method tries to create an OpenGL context for rendering (done while constructing the QmlRenderer class) and the bug was quite weird in itself. Initially, I thought it might be due to something related to QObject ownership and tried putting a wrapper class (both a wrapper for the renderer class, and a Qt wrapper for the producer wrapper itself – which might sound stupid) around my producer wrapper – and the code still produced a SIGSEGV. The next thing I could think of was that maybe it was due to OpenGL, and I was feeling confident after I found we have had issues with OpenGL context creation and thread management, as an OpenGL context and OpenGL functions need to be created and called on the same thread.

( an excellent reference: https://www.khronos.org/opengl/wiki/Common_Mistakes )

The problem was finally resolved (thank you JB) and it was not due to OpenGL; it was simply because I hadn’t created a QApplication for the producer (which is necessary for Qt producers). The whole month’s been a steep curve, definitely not easy, but I enjoyed it!

Right now, I have a producer which is almost complete and which, with a little more tweaking, will hopefully be put to use. I’m still facing a few minor issues which I hope to resolve soon to get a working producer. Once we get that, I can start work on the Kdenlive side. Let’s hope for the best!


KDE Frameworks 5.61, Applications 19.08 in FreeBSD

Saturday 17th of August 2019 10:00:00 PM

The KDE community’s large software product releases – Frameworks, Plasma, and Applications – come down the turnpike with distressing regularity. Like clockwork, some might say. We all know Plasma is all about clocks.

Recent releases were KDE Frameworks 5.61 and KDE Applications 19.08. These have both landed in the official FreeBSD ports tree, after Tobias did most of the work and I pushed the big red button.

Your FreeBSD machine will need to be following current ports – not the quarterly release branches, since we don’t backport to those.

All the modern bits have arrived, maintaining the KDE-FreeBSD team’s commitment to up-to-date software for the FreeBSD desktop. The one thing we’re currently lagging on is Qt 5.13. There’s a FreeBSD problem report tracking that update.

Updating from previous

As usual, pkg upgrade should do all the work and get you a fully updated FreeBSD desktop system with KDE Plasma and the whole bit.

I ran into one issue though, related to the simultaneous upgrade of Akonadi and MySQL. I ended up in a situation where Akonadi wouldn’t start because it needed to update the database, and MySQL wouldn’t start because it couldn’t load the database tables.

Since Akonadi functions as a cache to my IMAP server and local maildirs, and I don’t keep any Akonadi-style metadata anywhere, my solution was to move the “old” akonadi dirs aside, and just let it get on with it: that means re-downloading and indexing a lot of mail. For people with truly gigantic collections, this may not be acceptable, so I’m going to suggest, without actually having tried it:

  • ZFS snapshot your system
  • Stop Akonadi
  • Upgrade MySQL
  • Start and stop Akonadi
  • Upgrade Akonadi
  • Start Akonadi again
  • Drop the snapshots

I haven’t seen anyone else in the KDE-FreeBSD support channels mentioning this issue, so it may be an unusual situation.

Missing functionality

The kio-gdrive port doesn’t build. This is an upstream issue, where the API in 19.08 changed but there’s no corresponding kio-gdrive release to chase that API. I can see it’s fixed in git master, just not released.

All the other bits I use daily (konsole, kate, kmail, falkon) work fine.

ownCloud and CryFS

Saturday 17th of August 2019 08:30:38 PM

It is a great idea to encrypt files on the client side before uploading them to an ownCloud server if that server is not running in a controlled environment, or if one just wants to act defensively and minimize risk.

Some people think it is a great idea to include the functionality in the sync client.

I don’t agree because it combines two very complex topics into one code base and makes the code difficult to maintain. The risk is high to end up with a kind of code base which nobody is able to maintain properly any more. So let’s better avoid that for ownCloud and look for alternatives.

A good way is to use a so-called encrypted overlay filesystem and let ownCloud sync the encrypted files. The downside is that you cannot use the encrypted files in the web interface, because it cannot decrypt the files easily. To me, that is not overly important, because I want to sync files between different clients, which probably is the most common use case.

Encrypted overlay filesystems put the encrypted data in one directory called the cipher directory. A decrypted representation of the data is mounted to a different directory, in which the user works.

That is easy to set up and use, and also in principle good to use with file sync software like ownCloud, because it does not store the files in one huge container file that needs to be synced if one bit changes, as other solutions do.

To use it, the cipher directory must be configured as the local sync dir of the client. If a file is changed in the mounted dir, the overlay filesystem changes the crypto files in the cipher dir. These are synced by the ownCloud client.

One of the solutions I tried is CryFS. It works nicely in general, but is unfortunately very slow together with ownCloud sync.

The reason for that is that CryFS chunks all files in the cipher dir into 16 kB blocks, which are spread over a set of directories. That is very beneficial because file names and sizes cannot be reconstructed from the cipher dir, but it hits one of the weak sides of ownCloud sync: ownCloud is traditionally a bit slow with many small files spread over many directories. That shows dramatically in a test with CryFS: adding eleven new files with an overall size of around 45 MB to a CryFS filesystem directory makes the ownCloud client upload for 6:30 minutes.

Adding another four files with a total size of a bit over 1MB results in an upload of 130 files and directories, with an overall size of 1.1 MB.

A typical change use case, like changing an existing office text document locally, is not that bad. CryFS splits an 8.2 kB LibreOffice text document into three 16 kB files in three directories here. When one word gets inserted, CryFS needs to create three new dirs in the cipher dir and upload four new 16 kB blocks.

My personal conclusion: CryFS is an interesting project. It has nice integration in the KDE desktop with Plasma Vault. Splitting files into equal-sized blocks is good because it does not allow guessing data based on names and sizes. However, for syncing with ownCloud, it is not the best partner.

If there is a way to improve the situation, I would be eager to learn about it. Maybe the size of the blocks can be expanded, or the number of directories limited?
Also, the upcoming ownCloud sync client version 2.6.0 again has optimizations in the discovery and propagation of changes; I am sure that will improve the situation.

Let’s see what other alternatives can be found.

Magnetic Lasso for Krita is here

Saturday 17th of August 2019 10:57:27 AM

I won’t say that I am done with the Magnetic Lasso yet, but the results are a lot better now, to be honest. Take a look at one of the tests that I did:

Kdenlive 19.08 released

Thursday 15th of August 2019 10:05:21 PM

After a well deserved summer break, the Kdenlive community is happy to announce the first major release after the code refactoring. This version comes with a large number of fixes and nifty new features which lay the groundwork for the 3 point editing system planned for this cycle. The Project Bin received improvements to the icon view mode, and new features were added, like the ability to seek while hovering over clips with the mouse cursor and the possibility to add a whole folder hierarchy. On the usability front, a menu option was added to reset the Kdenlive config file, and you can now search for effects from all tabs instead of only the selected tab. Head to our download page for AppImage and Windows packages.

 

Highlights

3 point editing with keyboard shortcuts

With 19.08.0 we added the groundwork for full editing with keyboard shortcuts. This will speed up editing work, and you can do editing steps which are not possible, or not as quick and easy, with the mouse. Working with keyboard shortcuts in 19.08 is different from former Kdenlive versions. Mouse operations have not changed and work as before.

3 important points to understand the new concept:

Source (left image):
On the left of the track head are the green vertical lines (V1 or A2). The green line is connected to the source clip in the project bin. The green line only shows up when a clip is selected in the project bin, and its appearance depends on the type of the clip (A/V clip, picture/title/color clip, audio clip).

Target (right image):
In the track head, the target V1 or A1 is active when it’s yellow. An active target track reacts to edit operations like inserting a clip, even if the source is not active (see “Example of advanced edit” here).

The concept is like thinking of connectors:
Connect the source (the clip in the project bin) to a target (a track in the timeline). Only when both connectors on the same track are switched on does the clip “flow” from the project bin to the timeline. Be aware: active target tracks without a connected source still react to edit operations.

You can find a more detailed introduction in our Toolbox section here.

Adjust AV clips independently with Shift + resize to resize only the audio or video part of a clip. Meta/Windows key + move in the timeline allows moving the audio or video part to another track independently.

 

Press shift while hovering over clips in the Project Bin to seek through them.

 

Adjust the speed of a clip by pressing CTRL + dragging a clip in the timeline.

 

Now you can choose the number of channels and sample rates in the audio capture settings.

 

Other features

  • Added a parameter for steps that allows users to control the separation between keyframes generated by the motion tracker.
  • Re-enable transcode clip functionality.
  • Added a screen selection in the screen grab widget.
  • Add option to sort audio tracks in reverse order.
  • Default fade duration is now configurable from Kdenlive Settings > Misc.
  • Render dialog: add context menu to rendered jobs allowing to add rendered file as a project clip.
  • Renderwidget: Use max number of threads in render.
  • More UI components are translatable.

 

Full list of commits

  • Do not setToolTip() for the same tooltip twice. Commit.
  • Use translations for asset names in the Undo History. Commit.
  • Fix dropping clip in insert/overwrite mode. Commit.
  • Fix timeline drag in overwrite/edit mode. Commit.
  • Fix freeze deleting a group with clips on locked tracks. Commit.
  • Use the translated effect names for effect stack on the timeline. Commit.
  • Fix crash dragging clip in insert mode. Commit.
  • Use the translated transition names in the ‘Properties’ header. Commit.
  • Fix freeze and fade ins allowed to go past last frame. Commit.
  • Fix revert clip speed failing. Commit.
  • Fix revert speed clip reloading incorrectly. Commit.
  • Fix copy/paste of clip with negative speed. Commit.
  • Fix issues on clip reload: slideshow clips broken and title duration reset. Commit.
  • Fix slideshow effects disappearing. Commit.
  • Fix track effect keyframes. Commit.
  • Fix track effects don’t invalidate timeline preview. Commit.
  • Fix effect presets broken on comma locales, clear preset after resetting effect. Commit.
  • Fix crash in extract zone when no track is active. Commit.
  • Fix reverting clip speed modifies in/out. Commit.
  • Fix audio overlay showing up randomly. Commit.
  • Fix Find clip in bin not always scrolling to correct position. Commit.
  • Fix possible crash changing profile when cache job was running. Commit.
  • Fix editing bin clip does not invalidate timeline preview. Commit.
  • Fix audiobalance (MLT doesn’t handle start param as stated). Commit.
  • Fix target track inconsistencies:. Commit.
  • Make the strings in the settings dialog translatable. Commit.
  • Make effect names translatable in menus and in settings panel. Commit.
  • Remember last target track and restore when another clip is selected. Commit.
  • Dont’ process insert when no track active, don’t move cursor if no clip inserted. Commit.
  • Correctly place timeline toolbar after editing toolbars. Commit.
  • Lift/gamma/gain: make it possible to have finer adjustments with Shift modifier. Commit.
  • Fix MLT effects with float param and no xml description. Commit.
  • Cleanup timeline selection: rubber select works again when starting over a clip. Commit.
  • Attempt to fix Windows build. Commit.
  • Various fixes for icon view: Fix long name breaking layout, fix seeking and subclip zone marker. Commit.
  • Fix some bugs in handling of NVidia HWaccel for proxies and timeline preview. Commit.
  • Add 19.08 screenshot to appdata. Commit.
  • Fix bug preventing sequential names when making serveral script renderings from same project. Commit.
  • Fix compilation with cmake < 3.5. Commit.
  • Fix extract frame retrieving wrong frame when clip fps != project fps. Commit. Fixes bug #409927
  • Don’t attempt rendering an empty project. Commit.
  • Fix incorrect source frame size for transform effects. Commit.
  • Improve subclips visual info (display zone over thumbnail), minor cleanup. Commit.
  • Small cleanup of bin preview thumbnails job, automatically fetch 10 thumbs at insert to allow quick preview. Commit.
  • Fix project clips have incorrect length after changing project fps. Commit.
  • Fix inconsistent behavior of advanced timeline operations. Commit.
  • Fix “Find in timeline” option in bin context menu. Commit.
  • Support the new logging category directory with KF 5.59+. Commit.
  • Update active track description. Commit.
  • Use extracted translations to translate asset descriptions. Commit.
  • Fix minor typo. Commit.
  • Make the file filters to be translatable. Commit.
  • Extract messages from transformation XMLs as well. Commit.
  • Don’t attempt to create hover preview for non AV clips. Commit.
  • Add Cache job for bin clip preview. Commit.
  • Preliminary implementation of Bin clip hover seeking (using shift+hover). Commit.
  • Translate assets names. Commit.
  • Some improvments to timeline tooltips. Commit.
  • Reintroduce extract clip zone to cut a clip whithout re-encoding. Commit. See bug #408402
  • Fix typo. Commit.
  • Add basic collision check to speed resize. Commit.
  • Bump MLT dependency to 6.16 for 19.08. Commit.
  • Exit grab mode with Escape key. Commit.
  • Improve main item when grabbing. Commit.
  • Minor improvement to clip grabbing. Commit.
  • Fix incorrect development version. Commit.
  • Make all clips in selection show grab status. Commit.
  • Fix “QFSFileEngine::open: No file name specified” warning. Commit.
  • Don’t initialize a separate Factory on first start. Commit.
  • Set name for track menu button in timeline toolbar. Commit.
  • Pressing Shift while moving an AV clip allows to move video part track independently of audio part. Commit.
  • Ensure audio encoding do not export video. Commit.
  • Add option to sort audio tracks in reverse order. Commit.
  • Warn and try fixing clips that are in timeline but not in bin. Commit.
  • Try to recover a clip if it’s parent id cannot be found in the project bin (use url). Commit. See bug #403867
  • Fix tests. Commit.
  • Default fade duration is now configurable from Kdenlive Settings > Misc. Commit.
  • Minor update for AppImage dependencies. Commit.
  • Change speed clip job: fix overwrite and UI. Commit.
  • Readd proper renaming for change speed clip jobs. Commit.
  • Add whole hierarchy when adding folder. Commit.
  • Fix subclip cannot be renamed. Store them in json and bump document version. Commit.
  • Added audio capture channel & sample rate configuration. Commit.
  • Add screen selection in screen grab widget. Commit.
  • Initial implementation of clip speed change on Ctrl + resize. Commit.
  • Fix FreeBSD compilation. Commit.
  • Render dialog: add context menu to rendered jobs allowing to add rendered file as a project clip. Commit.
  • Ensure automatic compositions are compositing with correct track on project opening. Commit.
  • Fix minor typo. Commit.
  • Add menu option to reset the Kdenlive config file. Commit.
  • Motion tracker: add steps parameter. Patch by Balazs Durakovacs. Commit.
  • Try to make binary-factory mingw happy. Commit.
  • Remove dead code. Commit.
  • Add some missing bits in Appimage build (breeze) and fix some plugins paths. Commit.
  • AppImage: disable OpenCV freetype module. Commit.
  • Docs: Unbreak menus. Commit.
  • Sync Quick Start manual with UserBase. Commit.
  • Fix transcoding crashes caused by old code. Commit.
  • Reenable trancode clip functionality. Commit.
  • Fix broken fadeout. Commit.
  • Small collection of minor improvements. Commit.
  • Search effects from all tabs instead of only the selected tab. Commit.
  • Check whether first project clip matches selected profile by default. Commit.
  • Improve marker tests, add abort testing feature. Commit.
  • Revert “Trying to submit changes through HTTPS”. Commit.
  • AppImafe: define EXT_BUILD_DIR for Opencv contrib. Commit.
  • Fix OpenCV build. Commit.
  • AppImage update: do not build MLT inside dependencies so we can have more frequent updates. Commit.
  • If a timeline operation touches a group and a clip in this group is on a track that should not be affected, break the group. Commit.
  • Add tests for unlimited clips resize. Commit.
  • Small fix in tests. Commit.
  • Renderwidget: Use max number of threads in render. Commit.
  • Don’t allow resizing while dragging. Fixes #134. Commit.
  • Revert “Revert “Merge branch ‘1904’””. Commit.
  • Revert “Merge branch ‘1904’”. Commit.
  • Update master appdata version. Commit.

Cantor 19.08

Thursday 15th of August 2019 09:12:24 PM

Since last year, development in Cantor has kept up quite good momentum. After the many new features and the stabilization work done in the 18.12 release (see this blog post for an overview), we continued to work on improving the application in 19.04. Today the release of KDE Applications 19.08, and with it of Cantor 19.08, was announced. In this release, too, we concentrated mostly on improving the usability of Cantor and stabilizing the application. See the ChangeLog file for the full list of changes.

Among the new features targeting usability, we want to mention the improved handling of the “backends”. As you know, Cantor serves as the front end to different open-source computer algebra systems and programming languages and requires these backends for the actual computation. The communication with the backends is handled via different plugins that are installed and loaded on demand. In the past, in case a plugin for a specific backend failed to initialize (e.g. because the backend executable was not found, etc.), we didn’t show it in the “Choose a Backend” dialog and the user was completely lost. Now we still don’t allow creating a worksheet for this backend, but we show the entry in the dialog together with a message about why the plugin is disabled, as in the example below, which advises checking the executable path in the settings:



It is similar for cases where the plugin, as compiled and provided by the Linux distribution, doesn’t match the version of the backend that the user installed manually on the system and asked Cantor to use. Here we clearly inform the user about the version mismatch and also advise what to do:


Having mentioned custom installations of backends and Julia above, in 19.08 we allow setting a custom path to the Julia interpreter, similar to what is already possible for some other backends.

The handling of Markdown and LaTeX entries became more comfortable. We now allow quickly switching from the rendered result to the original code via a mouse double click; switching back happens, as usual, via the evaluation of the entry. Furthermore, the results of such rendered Markdown and LaTeX entries are now saved as part of the project. This allows consuming projects with such Markdown and LaTeX entries also on systems without support for the Markdown and LaTeX rendering process. It also decreases the loading times of projects, since the ready-to-use results can be used directly.

In 19.08 we added the “Recent Files” menu, allowing quick access to recently opened projects:


Among important bug fixes, we want to mention fixes that improved the communication with the external processes “hosting” embedded interpreters like Python and Julia. Cantor now reacts much better to errors and crashes produced in those external processes. For Python, the interruption of running commands was improved.

While working on 19.08 to make the application more usable and stable, we also worked on some bigger new features in parallel. This development is being done as part of a Google Summer of Code project, whose goal is to add support for Jupyter notebooks in Cantor. The idea behind this project and its progress are covered in a series of blogs (here, here and here). The code is in quite good shape and has already been merged to master. This is how such a Jupyter notebook looks in Cantor:

We plan to release this in the upcoming release of KDE Applications 19.12. Stay tuned!

KTouch in KDE Apps 19.08.0

Thursday 15th of August 2019 02:34:54 PM

KTouch, an application to learn and practice touch typing, has received a considerable update with today's release of KDE Applications 19.08.0. It includes a complete redesign by me of the home screen, which is responsible for selecting the lesson to train on.

The new home screen of KTouch

There is now a new sidebar offering all the courses KTouch has for a total of 34 different keyboard layouts. In previous versions, KTouch presented only the courses matching the current keyboard layout. Now it is much more obvious how to train on different keyboard layouts than the current one.

Other improvements in this release include:

  • Tab focus now works as expected throughout the application and allows training without ever touching the mouse.
  • Access to training statistics for individual lessons from the home screen has been added.
  • KTouch now supports rendering on HiDPI screens.

KTouch 19.08.0 is available on the Snap Store and is coming to your Linux distribution.

Introducing Qt Quick 3D: A high-level 3D API for Qt Quick

Wednesday 14th of August 2019 10:03:06 AM

As Lars mentioned in his Technical Vision for Qt 6 blog post, we have been researching how we could have a deeper integration between 3D and Qt Quick. As a result we have created a new project, called Qt Quick 3D, which provides a high-level API for creating 3D content for user interfaces from Qt Quick. Rather than using an external engine which can lead to animation synchronization issues and several layers of abstraction, we are providing extensions to the Qt Quick Scenegraph for 3D content, and a renderer for those extended scene graph nodes.

Does that mean we wrote yet another 3D Solution for Qt?  Not exactly, because the core spatial renderer is derived from the Qt 3D Studio renderer. This renderer was ported to use Qt for its platform abstraction and refactored to meet Qt project coding style.

“San Miguel” test scene running in Qt Quick 3D

What are our Goals?  Why another 3D Solution?

Unified Graphics Story

The single most important goal is that we want to unify our graphics story. Currently we are offering two comprehensive solutions for creating fluid user interfaces, each having its own corresponding tooling.  One of these solutions is Qt Quick, for 2D, the other is Qt 3D Studio, for 3D.  If you limit yourself to using either one or the other, things usually work out quite fine.  However, what we found is that users typically ended up needing to mix and match the two, which leads to many pitfalls both in run-time performance and in developer/designer experience.

Therefore, and for simplicity’s sake, we aim to have one runtime (Qt Quick), one common scene graph (Qt Quick Scenegraph), and one design tool (Qt Design Studio).  This should present no compromises in features, performance or the developer/designer experience. This way we do not need to further split our development focus between more products, and we can deliver more features and fixes faster.

Intuitive and Easy to Use API

The next goal for Qt Quick 3D is to provide an API for defining 3D content, an API that is approachable and usable by developers without the need to understand the finer details of the modern graphics pipeline.  After all, the majority of users do not need to create specialized 3D graphics renderers for each of their applications, but rather just want to show some 3D content, often alongside 2D.  So we have been developing Qt Quick 3D with this perspective in mind.

That being said, we will be exposing more and more of the rendering API over time which will make more advanced use cases, needed by power-users, possible.

At the time of writing of this post we are only providing a QML API, but the goal in the future is to provide a public C++ API as well.

Unified Tooling for Qt Quick

Qt Quick 3D is intended to be the successor to Qt 3D Studio.  For the time being Qt 3D Studio will still continue to be developed, but long-term will be replaced by Qt Quick and Qt Design Studio.

Here we intend to take the best parts of Qt 3D Studio and roll them into Qt Quick and Qt Design Studio.  So rather than needing a separate tool for Qt Quick or 3D, it will be possible to just do both from within Qt Design Studio.  We are working on the details of this now and hope to have a preview of this available soon.

For existing users of Qt 3D Studio, we have been working on a porting tool to convert projects to Qt Quick 3D. More on that later.

First Class Asset Conditioning Pipeline

When dealing with 3D scenes, asset conditioning becomes more important because now there are more types of assets being used, and they tend to be much bigger overall.  So as part of the Qt Quick 3D development effort we have been looking at how we can make it as easy as possible to import your content and bake it into efficient runtime formats for Qt Quick.

For example, at design time you will want to specify the assets you are using based on what your asset creation tools generate (like FBX files from Maya for 3D models, or PSD files from Photoshop for textures), but at runtime you would not want the engine to use those formats.  Instead, you will want to convert the assets into some efficient runtime format, and have them updated each time the source assets change.  We want this to be an automated process as much as possible, and so want to build this into the build system and tooling of Qt.

Cross-platform Performance and Compatibility

Another of our goals is to support multiple native graphics APIs, using the new Rendering Hardware Interface being added to Qt. Currently, Qt Quick 3D only supports rendering using OpenGL, like many other components in Qt. However, in Qt 6 we will be using the QtRHI as our graphics abstraction and there we will be able to support rendering via Vulkan, Metal and Direct3D as well, in addition to OpenGL.

What is Qt Quick 3D? (and what it is not)

Qt Quick 3D is not a replacement for Qt 3D, but rather an extension of Qt Quick’s functionality to render 3D content using a high-level API.

Here is what a very simple project with some helpful comments looks like:

import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick3D 1.0

Window {
    id: window
    visible: true
    width: 1280
    height: 720

    // Viewport for 3D content
    View3D {
        id: view
        anchors.fill: parent

        // Scene to view
        Node {
            id: scene

            Light {
                id: directionalLight
            }

            Camera {
                id: camera
                // It's important that your camera is not inside
                // your model so move it back along the z axis
                // The Camera is implicitly facing up the z axis,
                // so we should be looking towards (0, 0, 0)
                z: -600
            }

            Model {
                id: cubeModel
                // #Cube is one of the "built-in" primitive meshes
                // Other Options are:
                // #Cone, #Sphere, #Cylinder, #Rectangle
                source: "#Cube"

                // When using a Model, it is not enough to have a
                // mesh source (ie "#Cube")
                // You also need to define what material to shade
                // the mesh with. A Model can be built up of
                // multiple sub-meshes, so each mesh needs its own
                // material. Materials are defined in an array,
                // and order reflects which mesh to shade
                // All of the default primitive meshes contain one
                // sub-mesh, so you only need 1 material.
                materials: [
                    DefaultMaterial {
                        // We are using the DefaultMaterial which
                        // dynamically generates a shader based on what
                        // properties are set. This means you don't
                        // need to write any shader code yourself.
                        // In this case we just want the cube to have
                        // a red diffuse color.
                        id: cubeMaterial
                        diffuseColor: "red"
                    }
                ]
            }
        }
    }
}

The idea is that defining 3D content should be as easy as 2D.  There are a few extra things you need, like the concepts of Lights, Cameras, and Materials, but all of these are high-level scene concepts, rather than implementation details of the graphics pipeline.

This simple API comes at the cost of less power, of course.  While it may be possible to customize materials and the content of the scene, it is not possible to completely customize how the scene is rendered, unlike in Qt 3D with its customizable framegraph.  Instead, for now there is a fixed forward renderer, and you can define with properties in the scene how things are rendered.  This is like other existing engines, which typically have a few possible rendering pipelines to choose from, and those then render the logical scene.

A Camera orbiting around a Car Model in a Skybox with Axis and Gridlines (note: stutter is from the 12 FPS GIF )

What Can You Do with Qt Quick 3D?

Well, it can do many things, but these are built up using the following scene primitives:

Node

Node is the base component for any node in the 3D scene.  It represents a transformation in 3D space, but is non-visual.  It works similarly to how the Item type works in Qt Quick.

Camera

Camera represents how a scene is projected to a 2D surface. A camera has a position in 3D space (as it is a Node subclass) and a projection.  To render a scene, you need to have at least one Camera.

Light

The Light component defines a source of lighting in the scene, at least for materials that consider lighting.  Right now, there are 3 types of lights: Directional (default), Point and Area.

Model

The Model component is the one visual component in the scene.  It represents a combination of geometry (from a mesh) and one or more materials.

The source property of the Model component expects a .mesh file, which is the runtime format used by Qt Quick 3D.  To get mesh files, you need to convert 3D models using the asset import tool.  There are also a few built-in primitives. These can be used by setting the following values on the source property: #Cube, #Cylinder, #Sphere, #Cone, or #Rectangle.

We will also be adding a programmatic way to define your own geometry at runtime, but that is not yet available in the preview.

Before a Model can be rendered, it must also have a Material. This defines how the mesh is shaded.

DefaultMaterial and Custom Materials

The DefaultMaterial component is an easy to use, built-in material.  All you need to do is to create this material, set the properties you want to define, and under the hood all necessary shader code will be automatically generated for you.  All the other properties you set on the scene are taken into consideration as well. There is no need to write any graphics shader code (such as, vertex or fragment shaders) yourself.

It is also possible to define so-called CustomMaterials, where you do provide your own shader code.  We also provide a library of pre-defined CustomMaterials you can try out by just adding the following to your QML imports:

import QtQuick3D.MaterialLibrary 1.0

Texture

The Texture component represents a texture in the 3D scene, as well as how it is mapped to a mesh.  The source for a texture can either be an image file, or a QML Component.

A Sample of the Features Available

3D Views inside of Qt Quick

To view 3D content inside of Qt Quick, it is necessary to flatten it to a 2D surface.  To do this, you use the View3D component.  View3D is the only QQuickItem-based component in the whole API.  You can either define the scene as a child of the View3D or reference an existing scene by setting the scene property to the root Node of the scene you want to render.

If you have more than one camera, you can also set which camera you want to use to render the scene.  By default, it will just use the first active camera defined in the scene.

Also it is worth noting that View3D items do not necessarily need to be rendered to off-screen textures before being rendered.  It is possible to set one of the 4 following render modes to define when the 3D content is rendered:

  1. Texture: View3D is a Qt Quick texture provider and renders content to a texture via an FBO
  2. Underlay: View3D is rendered before Qt Quick’s 2D content is rendered, directly to the window (3D is always under 2D)
  3. Overlay: View3D is rendered after Qt Quick’s 2D content is rendered, directly to the window (3D is always over 2D)
  4. RenderNode: View3D is rendered in-line with the Qt Quick 2D content.  This can however lead to some quirks due to how Qt Quick 2D uses the depth buffer in Qt 5.

 

2D Views inside of 3D

It could be that you also want to render Qt Quick content inside of a 3D scene.  To do so, anywhere a Texture is taken as a property value (for example, in the diffuseMap property of the default material), you can use a Texture with its sourceItem property set, instead of just specifying a file in the source property. This way the referenced Qt Quick item will be automatically rendered and used as a texture.

The diffuse color textures being mapped to the cubes are animated Qt Quick 2D items.
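As a minimal sketch of that pattern – using only the diffuseMap and sourceItem properties mentioned above, with a placeholder Rectangle and Text as the 2D content – shading a cube with live Qt Quick content could look roughly like this:

Model {
    source: "#Cube"
    materials: [
        DefaultMaterial {
            // Instead of pointing diffuseMap at an image file,
            // reference a live Qt Quick item via Texture.sourceItem.
            diffuseMap: Texture {
                sourceItem: Rectangle {
                    width: 256; height: 256
                    color: "steelblue"
                    Text {
                        anchors.centerIn: parent
                        color: "white"
                        text: "Hello from Qt Quick 2D"
                    }
                }
            }
        }
    ]
}

The referenced item is rendered by Qt Quick and the result is mapped onto the mesh, so any animation running in the 2D item shows up on the 3D surface.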

3D QML Components

Due to Qt Quick 3D being built on QML, it is possible to create reusable components for 3D as well.  For example, if you create a Car model consisting of several Models, just save it to Car.qml. You can then instantiate multiple instances of Car by just reusing it, like any other QML type, as sketched below. This is very important because this way 2D and 3D scenes can be created using the same component model, instead of having to deal with different approaches for the 2D and 3D scenes.
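As a rough illustration of that idea – using only the built-in primitive meshes, since a real car would reference imported .mesh files – a hypothetical Car.qml could be as simple as:

// Car.qml – a reusable 3D component, defined like any other QML component
Node {
    id: car
    // The "body" of the car
    Model {
        source: "#Cube"
        materials: [ DefaultMaterial { diffuseColor: "red" } ]
    }
    // A "wheel", offset from the body along the x and y axes
    Model {
        source: "#Cylinder"
        x: -60
        y: -50
        materials: [ DefaultMaterial { diffuseColor: "black" } ]
    }
}

which can then be instantiated repeatedly inside any scene:

Node {
    Car { }           // one car at the origin
    Car { x: 300 }    // a second instance, moved along the x axis
}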

Multiple Views of the Same Scene

Because scene definitions can exist anywhere in a Qt Quick project, it's possible to reference them from multiple View3D items.  If you had multiple cameras in a scene, you could even render from each one to a different View3D.
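
A minimal sketch of one scene rendered by two views, each with its own camera (the ids, sizes, and camera positions are made up for illustration, and the snippet is assumed to live inside a normal Qt Quick item hierarchy):

Node {
    id: sharedScene
    Light { }
    Camera { id: closeCamera; z: -150 }
    Camera { id: farCamera;   z: -600 }
    Model { source: "#Cone"; materials: DefaultMaterial { } }
}

View3D { width: 400; height: 400; scene: sharedScene; camera: closeCamera }
View3D { x: 400; width: 400; height: 400; scene: sharedScene; camera: farCamera }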

4 views of the same Teapot scene. Also changing between 3 Cameras in the Perspective view.

Shadows

Any Light component can specify that it is casting shadows.  When this is enabled, shadows are automatically rendered in the scene.  Depending on what you are doing though, rendering shadows can be quite expensive, so you can fine-tune which Model components cast and receive shadows by setting additional properties on the Model.
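
A minimal sketch of how this might look in QML; the exact property names (castsShadow on the light, castsShadows/receivesShadows on the model) are assumptions based on the behaviour described above, not confirmed API from this post:

Light {
    castsShadow: true                 // assumed property name: this light now produces shadows
}
Model {
    source: "#Sphere"
    castsShadows: true                // assumed per-model toggles to keep the cost down
    receivesShadows: false
    materials: DefaultMaterial { }
}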

Image Based Lighting

In addition to the standard Light components, it's possible to light your scene by defining an HDRI map. This Texture can be set either for the whole View3D in its SceneEnvironment property, or on individual Materials.
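
A minimal sketch, assuming the View3D exposes its SceneEnvironment through an environment property and that the HDRI map is wired up via a lightProbe Texture (both names are assumptions, as is the file path):

View3D {
    anchors.fill: parent
    scene: sceneRoot
    environment: SceneEnvironment {
        lightProbe: Texture { source: "maps/studio.hdr" }   // hypothetical HDRI file
    }
}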

Animations

Animations in Qt Quick 3D use the same animation system as Qt Quick.  You can bind any property to an animator and it will be animated and updated as expected. Using the QtQuickTimeline module it is also possible to use keyframe-based animations.
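
For example, a plain Qt Quick animation can drive a 3D node's position. A minimal sketch, assumed to sit inside a View3D scene:

Model {
    source: "#Cylinder"
    materials: DefaultMaterial { diffuseColor: "limegreen" }

    NumberAnimation on z {            // the same animation types as in 2D Qt Quick
        from: -600; to: -100
        duration: 2000
        loops: Animation.Infinite
    }
}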

Like the component model, this is another important step in reducing the gap between 2D and 3D scenes, as no separate, potentially conflicting animation systems are used here.

Currently there is no support for rigged animations, but that is planned in the future.

How Can You Try it Out?

The intention is to release Qt Quick 3D as a technical preview along with the release of Qt 5.14.  In the meantime it is already possible to use it against Qt 5.12 and higher.

To get the code, you just need to build the QtQuick3D module which is located here:

https://git.qt.io/annichol/qtquick3d

What About Tooling?

The goal is that it should be possible via Qt Design Studio to do everything you need to set up a 3D scene. That means being able to visually lay out the scene, import 3D assets like meshes, materials, and textures, and convert those assets into efficient runtime formats used by the engine.

A demonstration of early Qt Design Studio integration for Qt Quick 3D

Importing 3D Scenes to QML Components

Qt Quick 3D can also be used by writing QML code manually. Therefore, we also have some stand-alone utilities for converting assets.  One such tool is the balsam asset conditioning tool.  Right now it is possible to feed this utility an asset from a 3D asset creation tool like Blender, Maya, or 3DS Max, and it will generate a QML component representing the scene, as well as any textures, meshes, and materials it uses.  Currently this tool supports generating scenes from the following formats:

  • FBX
  • Collada (dae)
  • OBJ
  • Blender (blend)
  • GLTF2

To convert the file myTestScene.fbx you would run:

./balsam -o ~/exportDirectory myTestScene.fbx

This would generate a file called MyTestScene.qml together with any assets needed. Then you can just use it like any other Component in your scene:

import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick3D 1.0

Window {
    width: 1920
    height: 1080
    visible: true
    color: "black"

    Node {
        id: sceneRoot
        Light { }
        Camera { z: -100 }
        MyTestScene { }
    }

    View3D {
        anchors.fill: parent
        scene: sceneRoot
    }
}

We are working to improve the assets generated by this tool, so expect improvements in the coming months.

Converting Qt 3D Studio Projects

In addition to being able to generate 3D QML components from 3D asset creation tools, we have also created a plugin for our asset import tool to convert existing Qt 3D Studio projects.  If you have used Qt 3D Studio before, you will know it generates projects in XML format to define the scene.  If you give the balsam tool a UIP or UIA project generated by Qt 3D Studio, it will also generate a Qt Quick 3D project based on that.  Note however that since the runtime used by Qt 3D Studio is different from Qt Quick 3D, not everything will be converted. It should nonetheless give a good approximation or starting point for converting an existing project.  We hope to continue improving support for this path to smooth the transition for existing Qt 3D Studio users.

Qt 3D Studio example application ported using Qt Quick 3D’s import tool. (it’s not perfect yet)

What About Qt 3D?

The first question I expect to get is: why not just use Qt 3D?  This is the same question we have been exploring for the last couple of years.

One natural assumption is that we could just build all of Qt Quick on top of Qt 3D if we want to mix 2D and 3D.  We intended to and started to do this with the 2.3 release of Qt 3D Studio.  Qt 3D’s powerful API provided a good abstraction for implementing a rendering engine to re-create the behavior expected by Qt Quick and Qt 3D Studio. However, Qt 3D’s architecture makes it difficult to get the performance we needed on entry-level embedded hardware. Qt 3D also comes with a certain overhead from its own limited runtime as well as from being yet another level of abstraction between Qt Quick and the graphics hardware.  In its current form, Qt 3D is not ideal to build on if we want to reach a fully unified graphics story while ensuring continued good support for a wide variety of platforms and devices ranging from low to high end.

At the same time, we already had a rendering engine in Qt 3D Studio that did exactly what we needed, and was a good basis for building additional functionality.  This comes with the downside that we no longer have the powerful APIs that come with Qt 3D, but in practice, once you start building a runtime on top of Qt 3D, you already end up making decisions about how things should work, leading to a limited ability to customize the framegraph anyway. In the end the most practical decision was to use the existing Qt 3D Studio rendering engine as our base, and build off of that.

What is the Plan Moving Forward?

This release is just a preview of what is to come.  The plan is to provide Qt Quick 3D as a fully supported module along with the Qt 5.15 LTS.  In the meantime we are working on further developing Qt Quick 3D for release as a Tech Preview with Qt 5.14.

For the Qt 5 series we are limited in how deeply we can combine 2D and 3D because of binary compatibility promises.  With the release of Qt 6 we are planning an even deeper integration of Qt Quick 3D into Qt Quick to provide an even smoother experience.

The goal here is that we want to be as efficient as possible when mixing 2D and 3D content, without introducing any additional overhead for users who do not use any 3D content at all.  We will not be doing anything drastic like forcing all Qt Quick apps to go through the new renderer, only those that mix 2D and 3D.

In Qt 6 we will also be using the Qt Rendering Hardware Interface to render Qt Quick (including 3D) scenes which should eliminate many of the current issues we have today with deployment of OpenGL applications (by using DirectX on Windows, Metal on macOS, etc.).

We also want to make it possible for end users to use the C++ Rendering API we have created more generically, without Qt Quick.  The code is there now as private API, but we are waiting until the Qt 6 time-frame (and the RHI porting) before we make the compatibility promises that come with public APIs.

Feedback is Very Welcome!

This is a tech preview, so much of what you see now is subject to change.  For example, the API is a bit rough around the edges now, so we would like to know what we are missing, what doesn’t make sense, what works, and what doesn’t. The best way to provide this feedback is through the Qt Bug Tracker.  Just remember to use the Qt Quick: 3D component when filing your bugs/suggestions.

The post Introducing Qt Quick 3D: A high-level 3D API for Qt Quick appeared first on Qt Blog.

Krita Sprint News <2019-08-14 Wed>

Wednesday 14th of August 2019 06:32:00 AM

I am in the Netherlands; I came for the Krita Sprint, and I have made a lot of progress on my Animated Brush for Google Summer of Code. Read More...

Lazy Qt Models from QVariant

Tuesday 13th of August 2019 10:00:00 PM

In Calamares there is a debug window; it shows some panes of information and one of them is a tree view of some internal data in the application. The data itself isn’t stored as a model though, it is stored in one big QVariantMap. So to display that map as a tree, the code needs to provide a Qt model so that then the regular Qt views can do their thing.

Each key in the map is a node in the tree to be shown; if the value is a map or a list, then sub-nodes are created for the items in the map or the list, and otherwise it’s a leaf that displays the string associated with the key. In the screenshot you can see the branding key which is a map, and that map contains a bunch of string values.

Historically, the way this map was presented as a model was as follows:

  • A JSON document representing the contents of the map is made,
  • The JSON document is rendered to text,
  • A model is created from the JSON text using dridk’s QJsonModel,
  • That model is displayed.

This struck me as the long way around. Even if there are only a few dozen items overall in the tree, it looks like there is a lot of copying and buffer management going on. The code where all this happens, though, is only a few lines – it looks harmless enough.

I decided that I wanted to re-do this bit of code – dropping the third-party code in the process, and so simplifying Calamares a little – by using the data from the QVariant directly, with only a “light weight” amount of extra data. If I was smart, I would consult more closely with Marek Krajewski’s Hands-On High Performance Programming with Qt 5, but .. this was a case of “I feel this is more efficient” more than doing the smart thing.

I give you VariantModel.

This is strongly oriented towards the key-value display of a QVariantMap as a tree, but it could possibly be massaged into another form. It also is pushy in smashing everything into string form. It could probably use data from the map more directly (e.g. pixmaps) and be even more fancy that way.

Most of my software development is pretty “plain”. It is straightforward code. This was one of the rare occasions that I took out pencil and paper and sketched a data structure before coding (or more accurately: I did a bunch of hacking, got nowhere, and realised I’d have to do some thinking before I’d get anywhere – cue tea and chocolate).

What I ended up with was a QVector of quintptrs (since a QModelIndex can use that quintptr as internal data). The length of the vector is equal to the number of nodes in the tree, and each node is assigned an index (I used depth-first traversal, in whatever arbitrary yet consistent order Qt gives me the keys, enumerating each node as it is encountered). In the vector, I store the parent index of each node, at the index of the node itself. The root is index 0, and has a special parent.

The image shows how a tree with nine nodes can be enumerated into a vector, and then how the vector is populated with parents. The root gets index 0, with a special parent. The first child of the root gets index 1, parent 0. The first child of that node gets index 2, parent 1; since it is a leaf node, its sibling gets index 3, parent 1 .. the whole list of nine parents looks like this:

-1, 0, 1, 1, 0, 0, 5, 5, 5

For QModelIndex purposes, this vector of numbers lets us do two things:

  • the number of children of node n is the number of entries in this vector with n as parent (e.g. a simple QVector::count()).
  • given a node n, we can find out its parent node (it’s at index n in the vector) but also which row it occupies (in QModelIndex terms), by counting how many other nodes with the same parent occur before it.

In order to get the data from the QVariant, we have to walk the tree, which requires a bunch of parent lookups and recursively descending through the tree once the parents are all found.

Changing the underlying map isn’t immediately fatal, but changing the number of nodes (especially in intermediate levels) will do very strange things. There is a reload() method to re-build the list of parent indexes if the underlying data changes – in that sense it’s not a very robust model. It might make sense to memoize the data as well while walking the tree – again, I need to read more of Marek’s work.

I’m kinda pleased with the way it turned out; the consumers of the model have become slightly simpler, even if the actual model code (from QJsonModel to VariantModel) isn’t much smaller. There’s a couple of places in Calamares that might benefit from this model besides the debug window, so it is likely to get some revisions as I use it more.

Krita Sprint 2019

Tuesday 13th of August 2019 05:12:43 PM

So, we had a Krita sprint last week, a gathering of contributors of Krita. I’ve been at all sprints since 2015, which was roughly the year I became a Krita contributor. This is in part because I don’t have to go abroad, but also because I tend to do a lot of administrative side things.

This sprint was interesting in that it was an attempt to have as many artists as developers there, if not more. The idea being that where the previous sprint was very much focused on bugfixing and getting new contributors familiar with the code base (we fixed 40 bugs back then), this sprint would be more about investigating workflow issues, figuring out future goals, and general non-technical things like how to help people, how to engage people, and how to make people feel part of the community.

Unfortunately, it seems I am not really built for sprints. I was already somewhat tired when I arrived, and was eventually only able to do half days most of the time because there were just too many people …

So, what did I do this sprint?

Investigate LibSai (Tuesday)

So, PaintTool Sai is a 2D painting program with a simple interface that was the hottest thing around 2006 or so, because at the time, you had PaintShop Pro (a good image editing program, but otherwise…), Photoshop CS2 (you could paint with this but it was rather clunky), GIMP, OpenCanvas (very weird interface) and a bunch of natural media simulating programs (Corel, very buggy, and some others I don't remember). Paint Tool Sai was special in that it had a stabilizer, mirroring/rotating of the viewport, a color-mixing brush, variable width vector curves, and a couple of cool dockers. Mind you, it only had like 3 filters, but all those other things were HUGE back then. So everyone and their grandmother pirated it until the author actually made an English version, at which point like 90% of people still pirated it. Then the author proceeded to not update it for like… 8 years?, with Paint Tool Sai 2 being in beta for what feels like forever.

The lowdown is that nowadays many people (mostly teens, so I don't really want to judge them too much) are still using Paint Tool Sai 1, pirated, and it's so old it won't work on Windows 10 computers anymore. One of the things that had always bothered me is that there was no program outside of Sai that could handle opening the Sai file format. It was such a popular program, yet no one seemed to have tried?

So, it seems someone has tried, and even made a library out of it. If you look at the readme, the reason no one besides Wunkolo has tried to support it is because sai files aren't just encoded, no, they're encrypted. This would be fine if it were video game saves, but a painting program is not a video game, and I was slightly horrified, as it is a technological mechanism put in place to keep people from getting at the artwork they made, rather than just the most efficient way to store it on the computer. So I now feel more compelled to have Krita be able to open these files, so people can actually access them. So I sat down with Boudewijn to figure out how much needs to be done to add it to Krita, and we got some build errors, so we're delaying this for a bit. Made a phabricator task instead:

T11330 – Implement LibSai for importing PaintTool Sai files

Pressure Calibration Widget (Tuesday)

This was a slightly selfish thing. I had been having trouble with my pen deciding it had a different pressure curve every so often, and adjusting the global tablet curve was, while perfectly possible, getting a bit annoying. I had seen pressure calibration widgets in some Android programs, and I figured that the way I tried to figure out my pressure curve (messing with the tablet tester and checking the values) is a little bit too technical for people, so I decided to gather up all my focus and program a little widget that would help with that.

Right now, it asks for a soft stroke, a medium one and a heavy one, and then calculates the desired pressure curve from that. It’s a bit fiddly though, and I want to make it less fiddly but still friendly in how it guides you to provide the values it needs.

MR 104 – Initial Prototype Pressure Calibration Widget

HDR master class (Tuesday)

(Not really)

So, the sprint was also the first time to test Krita on exciting new hardware. One of these was Krita on an Android device (more on that later), the other was the big HDR setup provided by Intel. I had already played with it back in January, and have, since the beginning of Krita’s LUT docker support, played with making HDR/scene-linear images in Krita. Thus when Raghu started painting, I ended up pointing at things to use and explaining some peculiarities (HDR is not just bright, but also wide gamut, and linear, so you are really painting with white).

Then Stefan joined in, and started asking questions, and I had to start my talk again. Then later that day Dmitry bothered David till he tried it out, and I explained everything again!

Generally, you don’t need an HDR setup to paint HDR images, but it does require a good idea of how to use the color management systems and wrapping your head around it. It seems that the latter was a really big hurdle, because artists who had come across as scared of it over IRC were, now that they could see the wide gamut colors, a lot more positive about it.

Animation cycles were shown off as well, but I had run out of juice too much to really appreciate it.

Later that evening we went to the Italian restaurant on the corner, which miraculously made ordering à la carte work for a group of 25 people. I ended up translating the whole menu for Tusooaa and Tiar, who repaid me by making fun of my habit of pulling apart the nougat candy I got with the coffee. *shakes fist in a not terribly serious way* Later, during the walk, there were also discussions about burnout and mental health, a rather relevant topic for freelancers.

Open Lucht Museum Arnhem (Wednesday)

I did make a windmill, but… I really should’ve stayed in Deventer and caught up on sleep. Being Dutch, this was the 5th or 7th time I had seen this particular open air museum (there’s another one (Bokrijk) in Belgium where I have been just as many times), but I had wanted to see how other people would react to it. In part because it shows all this old Dutch architecture, and in part because these are actual houses from those periods: they get carefully disassembled and then carefully rebuilt brick by brick, beam by beam on the museum grounds, which in itself is quite special.

But all I could think while there was ‘oh man, I want to sleep’. Next time I just need to stay in Deventer, I guess.

I did get to take a peek at other people’s sketchbooks and see, to my satisfaction, that I am not the only person who just writes notes and todo lists in the sketchbook as well.

At dinner, we mostly talked about the weather, and confusion over the lack of spiciness in the otherwise spicy cuisine of Indonesia (which was later revealed to be caused by the waiters not understanding we had wanted a mix of spicy and non-spicy things). And also that box-beds are really weird.

Taking Notes (Thursday Afternoon)

Given the bad decision I had made yesterday to go to the museum, I decided to be less dumb and tell everyone I’d sleep during the morning.

And then when I joined everyone, it turned out there had been half a meeting during the morning. And Hellozee was kind of hinting that I should definitely take the notes for the afternoon (looking at his notes and his own description of that day, taking notes had been a bit too intense for him). And later he ranted at me about the text tool, and I told him ‘Don’t worry, we know it is clunky as hell. Identifying that isn’t what is necessary to fix it’. (We have several problems, the first being the actual font stack itself, so we can have shaping for scripts like those for Arabic and Hindi; then there’s the layout, so we can have word wrap and vertical layout for CJK scripts; and only after those are solved can we even start thinking of improving the UI.)

Anyhow, the second part of the meeting was about Instagram, marketing, and just having a bit of fun with getting people to show off the art they made with Krita. The thing is, of course, that if you want it to be a little bit of fun, you need to be very thorough in how you handle it. Like, it could end up feeling like a dog-eat-dog style competition, and we also need to make it really clear how we deal with the usual ethics around artists and ‘working for exposure’. Sara Tepes was the one who wants to start up the Krita account for Instagram, which I am really thankful for. She also began the discussion on this, and I feel a little bad because I pointed out the ethical aspect by making fun of ‘working for exposure’, and I later realized that she was new to the sprint, and maybe that had been a bit too forward.

And then I didn’t get the chance to talk to her afterwards, so I couldn’t apologize. I did get some comments from others that they were glad I brought it up, but still it could’ve been nicer. orz

In the end we came to a compromise that people seemed comfortable with: a cycle of several images, selected from things people explicitly tag to be included on social media, shown for a short period, and a general page that explains how the whole process works, so that there’s no confusion about what and why and how, and so that it can just be a fun thing for people to do.

Android Testing (Friday)

I had poked at the Android version before, but last time it did not yet have graphics acceleration support, so it was really slow. This time I could really sit down and test it. It’s definitely gotten a lot better, and I can see myself or other artists using this as an alternative to a sketchbook, something to sit down and doodle on while the computer is for the bigger intensive work.

It was also a little funny: when I showed someone it was pressure sensitive, all the other artists present one by one walked over to me to try poking the screen with a stylus. I guess we generally have so much trouble getting pressure to work on desktop devices that it’s a little unbelievable it would just work on a mobile device.

T11355 – General feedback Android version.

That evening discussions were mostly about language, Photoshop’s magnetic lasso tool crashing, and the fact that Europeans talk about language a lot.

Saturday

On Saturday I read an academic book of ~300 pages, something I had really needed after all this. I felt a lot more clear-headed afterwards. I had attempted to help Boudewijn with bug triaging, which is something we usually do at a sprint, but I just couldn’t concentrate.

We were all too tired to talk much on Saturday. I can only remember eating.

Sunday

On Sunday I spent some time with Boudewijn going through the meeting notes and turning them into the sprint report. Boudewijn then spent five attempts trying to explain the current release schedule to me, and now I have my automated mails set up so people get warned about backporting their fixes and about the upcoming monthly release schedule.

In the evening I read through the Animator’s Survival Kit. We have a bit of an issue where it seems Krita’s animation tools are so intuitive that, when it comes to the unintuitive things that are inherent to big projects themselves (RAM usage, planning, pipeline), people get utterly confused.

We’ve already been doing a lot of things in that area: making it more obvious when you are running out of ram, making the render dialog a bit better, making onion skins a bit more guiding. But now I am also rewriting the animation page and trying to convey to aspiring animators that they cannot do a one hour 60 fps film in a single krita file, and that they will need to do things like planning. The Animator’s Survival Kit is a book that’s largely about planning, which very little talked about, so hence why it is suggested to aspiring animators a lot, and I was reading it through to make sure I wasn’t about to suggest nonsense.

We had, after all the Indians had left, gone to an Indian restaurant. Discussions were about spicy food, language and Europe.

Monday

On Monday I stuck around for the IRC meeting and afterwards went home.

It was lovely to meet everyone individually, and each singular conversation I had had been lovely, but this is really one of those situations where I need to learn to take more breaks and not be too angry at myself for that. I hope to meet everyone again in the future in a less crowded setting, so I can actually have all the fun of meeting fellow contributors and none of the exhausting parts. The todo list we’ve accumulated is a bit daunting, but hopefully we’ll get through it together.

Krita 2019 Sprint: Animation and Workflow BoF

Tuesday 13th of August 2019 02:00:15 PM

Last week we had a huge Krita Sprint in Deventer. A detailed report was written by Boudewijn here, and I will concentrate on the Animation and Workflow discussion we had on Tuesday, when Boudewijn was away, meeting and managing arriving people. The discussion was centered around Steven and his workflow, but other people joined during the discussion: Noemie, Scott, Raghavendra and Jouni.
(Eternal) Eraser problem
Steven brought up a point that the current brush options "Eraser Switch Size" and "Eraser Switch Opacity" are buggy, which brought up an old topic again. These options were always considered a workaround for people who need a distinct eraser tool/brush tip, and they were always difficult to maintain.

After a long discussion with a broader circle of people, we concluded that the "Ten Brushes Plugin" can be used as an alternative to a separate eraser tool. One should just assign some eraser-behaving preset to the 'E' key using this plugin. So we decided that we need the following steps:
Proposed solution:
  1. Ten Brushes Plugin should have some eraser preset configured by default
  2. This eraser preset should be assigned to "Shift+E" by default. So when people ask about "Eraser Tool" we could just tell them "please use Shift+E".
  3. [BUG] Ten brushes plugin doesn't reset back to a normal brush when the user picks/changes painting color, like normal eraser mode does.
  4. [BUG] Brush slot numbering is done in 1,2,3,...,0 order, which is not obvious. It should be 0,1,2,...,9 instead.
  5. [BUG] It is not possible to set up a shortcut for a brush preset right in the Ten Brushes Plugin itself; the user has to go to the settings dialog instead.
Stabilizer workflow issues
In Krita, stabilizer settings are global. That is, they are applied to whatever brush preset you use at the moment. That is very inconvenient, e.g. when you do precise line art. If you switch to a big eraser to fix up the line, you don't need the same stabilization as for the liner.
Proposed solution:
  1. The stabilizer settings are still in the Tool Options docker; we don't move them into the Brush Settings (because sometimes you need to make them global?)
  2. Brush Preset should have a checkbox "Save Stabilizer Settings" that will load/save the stabilizer settings when the preset is selected/unselected.
  3. The editing of these (basically brush-based) settings will happen in the tool options.
Questions:
  • I'm not sure if the last point is sane. Technically, we can move the stabilizer settings into the brush preset. And if the user wants to use the same stabilizer settings in different presets, he can just lock the corresponding brush settings (we have a special lock icon for that). So should we move the stabilizer settings into the brush preset editor or keep them in the tool options?
Cut Brush feature
Sometimes painters need a lot of stamps for often-used objects, e.g. a head or a leg for an animation character. A lot of painters use the brush preset selector as storage for that. That is, if you need a copy of a head on another frame, just select the preset and click at the proper position. We already have stamp brushes and they work quite well, we just need to streamline the workflow a bit.
Proposed solution:
  1. Add a shortcut for converting the current selection into a brush. It should in particular:
    • create a brush from the current selection, add a default name to it and create an icon from the selection itself
    • deselect the current selection. This is needed to ensure that the user can paint right after pressing this shortcut
  2. There should be shortcuts to rotate and scale the current brush
  3. There should be a shortcut for switching prev/next dab of the animated brush
  4. The brush needs a special outline mode, where it paints not an outline but a full colorful preview. It should be activated by some modifier (that is, press+hold).
  5. Ideally, if multiple frames are selected, the created brush should become animated. That would allow people to create "walking brush" or "raining brush".
Multiframe editing mode
One of the major things Krita's animation tools still lack is a multiframe editing mode, that is, the ability to transform/edit multiple frames at once. We discussed it and ended up with a list of requirements.
Proposed solution:
  1. By default all the editing tools transform the current frame only
  2. The only exception is "Image" operations, which operate on the entire image, e.g. scale, rotate, change color space. These operations work on all existing frames.
  3. If there is more than one frame selected in the timeline, then the operation/tool should be applied to these frames only.
  4. We need a shortcut/action in the frame's (or timeline layer's) context menu: "Select all frames"
  5. Tools/Actions that should support multiframe operations:
    • Brush Tool (low-priority)
    • Move Tool
    • Transform Tool
    • Fill Tool (may be efficiently used on multiple frames with erase-mode-trick)
    • Filters
    • Copy-Paste selection (now we can only copy-paste frames, not selections)
    • Fill with Color/Pattern/Clear
BUGS
There is also a set of unsorted bugs that we found during the discussion:
  1. On Windows, multiple main windows don't have unique identifiers, so they are not distinguishable from OBS.
  2. Animated brush spits out a lot of dabs at the beginning of the stroke
  3. "Show in Timeline" should be the default for all new layers
  4. Fill Tool is broken with Onion Skins (BUG:405753)
  5. Transform Tool is broken with Onion Skins (BUG:408152)
  6. Move Tool is broken with Onion Skins (BUG:392557)
  7. When copy-pasting frames on the timeline, in-betweens should override the destination (and technically remove everything that was in the destination position). Right now source and destination keyframes are merged, which is not what animators expect.
  8. Changing the "End" of the animation in the "Animation" docker doesn't update the timeline's scroll area. You need to create a new layer to update it.
  9. Delayed Save dialog doesn't show the name of the stroke that delays it (and sometimes the progress bar as well). It used to work, but now is broken.
  10. [WISH] We need "Insert pasted frames", which will not override destination, but just offset it to the right.
  11. [WISH] Filters need better progress reporting
  12. [WISH] Auto-change the background of the Text Edit Dialog when the text color is too similar to it.

As a conclusion, it was very nice to be at the sprint and to be able to talk to real painters! Face-to-face meetings are really important for getting such detailed lists of new features we need to implement. If we had done this discussion through Phabricator, we would have spent weeks on it :)

KDE.org Applications Site

Tuesday 13th of August 2019 02:00:05 PM

I’ve updated the kde.org/applications site so KDE now has web pages and lists the applications we produce.

In the update this week it’s gained Console apps and Addons.

Some exciting console apps we have include Clazy, kdesrc-build, KDebug Settings (a GUI app, but it has no menu entry) and KDialog (another GUI app, but called from the command line).

This KDialog example takes on a whole new meaning after watching the Chernobyl telly drama.

And for addon projects we have stuff like File Stash, Latte Dock and KDevelop’s addons for PHP and Python.

At KDE we want to be a great home for your project, and this is an important part of that.

 

KDevelop 5.4.1 released

Monday 12th of August 2019 08:00:00 PM


Today we provide a stabilization and bugfix release with version 5.4.1. This is a bugfix-only release, which introduces no new features and as such is a safe and recommended update for everyone currently using KDevelop 5.4.0.

You can find the updated Linux AppImage as well as the source code archives on our download page.

ChangeLog

kdevelop
  • Fix crash: add missing Q_INTERFACES to OktetaDocument for IDocument. (commit. fixes bug #410820)
  • Shell: do not show bogus error about repo urls on DnD of normal files. (commit)
  • [Grepview] Use the correct icons. (commit)
  • Fix calculation of commit age in annotation side bar for < 1 year. (commit)
  • Appdata: add entry. (commit)
  • Fix registry path inside kdevelop-msvc.bat. (commit)
kdev-python

No user-relevant changes.

kdev-php
  • Update phpfunctions.php to phpdoc revision 347831. (commit)
  • Switch few http URLs to https. (commit)
