Slashdot

Slashdot: Linux
News for nerds, stuff that matters

Google, Linaro Develop Custom Android Edition For Project Ara

Thursday 31st of July 2014 01:24:00 AM
rtoz writes with this excerpt from an IDG story about the creation of an Android fork made just for Google's modular cell-phone project: A special edition of Android had to be created for the unique customizable design of Project Ara, said George Grey, CEO of Linaro. ... Android can already plug and play SD cards, but Grey said additional OS functionality is needed for storage, cameras and other modules that are typically inside smartphones but can now be added externally to Project Ara. A lot of work is also being done on UniPro transport drivers, which connect modules and components in Project Ara. UniPro protocol drivers in Android will function much like the USB protocol, where modules will be recognized based on different driver "classes," such as those for networking, sensors, imaging, input and others. Some attachable parts may not be recognized by Android. For those parts, separate drivers need to be developed by module makers through emulators. "That will need to be done in a secure system so the device can't do damage to the system," Grey said. Project Ara is a very disruptive concept that turns conventional thinking about how to build phones on its head, Grey said.
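As a purely illustrative aside (the actual UniPro/Ara driver interfaces were not public at the time), class-based module recognition of the kind Grey describes works much like USB class drivers: the OS keeps a registry of generic drivers keyed by class ID and falls back to a vendor-supplied driver, run in a restricted environment, when no class match exists. A minimal, hypothetical Python sketch follows; every class number and driver name is invented for illustration.

# Hypothetical sketch: class-based module/driver matching, by analogy with USB
# device classes. Class IDs, descriptors and driver names are invented.

MODULE_CLASS_NETWORKING = 0x01
MODULE_CLASS_SENSOR     = 0x02
MODULE_CLASS_IMAGING    = 0x03
MODULE_CLASS_INPUT      = 0x04

# Registry mapping a module class to the generic driver that handles it.
DRIVER_REGISTRY = {
    MODULE_CLASS_NETWORKING: "generic_net_driver",
    MODULE_CLASS_SENSOR:     "generic_sensor_driver",
    MODULE_CLASS_IMAGING:    "generic_camera_driver",
    MODULE_CLASS_INPUT:      "generic_input_driver",
}

def bind_driver(module_class, vendor_driver=None):
    """Pick a driver for a newly attached module.

    A class driver is used when one exists; otherwise the OS falls back to a
    vendor-supplied driver, which (per Grey) would have to run in a secure,
    sandboxed environment so it cannot damage the system.
    """
    if module_class in DRIVER_REGISTRY:
        return DRIVER_REGISTRY[module_class]
    if vendor_driver is not None:
        return f"sandboxed:{vendor_driver}"
    raise LookupError("no driver available for module class %#x" % module_class)

if __name__ == "__main__":
    print(bind_driver(MODULE_CLASS_SENSOR))          # -> generic_sensor_driver
    print(bind_driver(0x7F, vendor_driver="acme"))   # -> sandboxed:acme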

Read more of this story at Slashdot.








Reglue: Opening Up the World To Deserving Kids With Linux Computers

Wednesday 30th of July 2014 09:17:00 AM
jrepin writes: Today, a child without access to a computer (and the Internet) at home is at a disadvantage before he or she ever sets foot in a classroom. The unfortunate reality is that in an age where computer skills are no longer optional, far too many families don't possess the resources to have a computer at home. Linux Journal recently had the opportunity to talk with Ken Starks about his organization, Reglue (Recycled Electronics and Gnu/Linux Used for Education) and its efforts to bridge this digital divide.

Read more of this story at Slashdot.








Valencia Linux School Distro Saves 36 Million Euro

Sunday 27th of July 2014 10:04:00 PM
jrepin (667425) writes "The government of the autonomous region of Valencia (Spain) earlier this month made available the next version of Lliurex, a customisation of the Edubuntu Linux distribution. The distro is used on over 110,000 PCs in schools in the Valencia region, saving some 36 million euro over the past nine years, the government says." I'd like to see more efforts like this in the U.S.; if mega school districts are paying for computers, I'd rather they at least support open source development as a consequence.

Read more of this story at Slashdot.








A Router-Based Dev Board That Isn't a Router

Sunday 27th of July 2014 09:10:00 PM
An anonymous reader writes with a link to an intriguing device highlighted at Hackaday (it's also an Indiegogo project, if it excites you to the tune of $90, and seems well on its way to meeting its modest goal): The DPT Board may be of interest to anyone looking to hack up a router for their own connected project or IoT implementation: hardware based on a fairly standard router, loaded up with OpenWRT, with a ton of I/O to connect to anything. It's basically a hugely improved version of the off-the-shelf routers you can pick up through the usual channels. On board are 20 GPIOs, a USB host port, 16MB of flash, 64MB of RAM, two Ethernet ports, and on-board 802.11n. The board comes pre-installed with OpenWRT, making it relatively easy to connect this small router-like device to LED strips, sensors, or whatever other project you have in mind.
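As a hedged illustration of what "connecting LED strips or sensors" to a board like this can look like in practice, the sketch below toggles one GPIO line through the legacy Linux sysfs interface (/sys/class/gpio), which OpenWRT-based boards commonly expose. The pin number is an assumption, and it presumes a Python interpreter is installed on the board.

# Minimal sketch: blink something wired to a GPIO pin using the legacy Linux
# sysfs GPIO interface. Pin 20 is an assumption -- check the board's docs.
import time

GPIO = 20
BASE = "/sys/class/gpio"

def write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

# Export the pin if the kernel has not exposed it yet, then make it an output.
try:
    write(f"{BASE}/export", GPIO)
except OSError:
    pass  # already exported
write(f"{BASE}/gpio{GPIO}/direction", "out")

# Toggle the pin ten times, half a second per state.
for _ in range(10):
    write(f"{BASE}/gpio{GPIO}/value", 1)
    time.sleep(0.5)
    write(f"{BASE}/gpio{GPIO}/value", 0)
    time.sleep(0.5)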

Read more of this story at Slashdot.








Linus Torvalds: "GCC 4.9.0 Seems To Be Terminally Broken"

Sunday 27th of July 2014 06:40:00 PM
hypnosec (2231454) writes to point out a pointed critique from Linus Torvalds of GCC 4.9.0, prompted by a random panic discovered in a load-balance function in Linux 3.16-rc6. In an email to the Linux kernel mailing list outlining two separate but possibly related bugs, Linus describes the compiler as "terminally broken," and worse ("pure and utter sh*t," only with no asterisk). A slice: "Lookie here, your compiler does some absolutely insane things with the spilling, including spilling a *constant*. For chrissake, that compiler shouldn't have been allowed to graduate from kindergarten. We're talking "sloth that was dropped on the head as a baby" level retardation levels here .... Anyway, this is not a kernel bug. This is your compiler creating completely broken code. We may need to add a warning to make sure nobody compiles with gcc-4.9.0, and the Debian people should probably downgrate their shiny new compiler."

Read more of this story at Slashdot.








GOG.com Announces Linux Support

Thursday 24th of July 2014 03:38:00 PM
For years, Good Old Games has made a business out of selling classic PC game titles completely free of DRM. Today they announced that their platform now supports Linux. They said, "We've put much time and effort into this project and now we've found ourselves with over 50 titles, classic and new, prepared for distribution, site infrastructure ready, support team trained and standing by ... We're still aiming to have at least 100 Linux games in the coming months, but we've decided not to delay the launch just for the sake of having a nice-looking number to show off to the press. ... Note that we've got many classic titles coming officially to Linux for the very first time, thanks to the custom builds prepared by our dedicated team of penguin tamers. ... For both native Linux versions, as well as special builds prepared by our team, GOG.com will provide distro-independent tar.gz archives and support convenient DEB installers for the two most popular Linux distributions: Ubuntu and Mint, in their current and future LTS editions."

Read more of this story at Slashdot.








Ask Slashdot: Linux Login and Resource Management In a Computer Lab?

Tuesday 22nd of July 2014 05:40:00 PM
New submitter rongten (756490) writes I am managing a computer lab composed of various kinds of Linux workstations, from small desktops to powerful workstations with plenty of RAM and cores. The users' $HOME is NFS-mounted, and they log in either at the console (no user switching allowed), via ssh, or via x2go. In the past, the powerful workstations were reserved for certain power users, but now even "regular" students may need access to high-memory machines for some tasks. Is there a resource-management system that would allow the following? To forbid the same user from logging in graphically more than once (like UserLock); to limit the number of ssh sessions (e.g. no user using distcc and spamming the rest of the machines, or, even worse, running jobs everywhere in parallel); to give priority to the console user (i.e. automatically renicing remote users' jobs and restricting their memory usage); and to avoid swapping and waiting (i.e. all the users trying to log into the latest and greatest machine, so logins should be limited in proportion to the capacity of the machine). The system being put in place uses Fedora 20 with LDAP/PAM authentication; it is Puppet-managed and NFS-based. In the past I tried to achieve similar functionality via cron jobs, login scripts, ssh and nx management, and a queuing system, but the result is not elegant and is heavily hacked together. Since these requirements should be pretty standard for a computer lab, I am surprised that I cannot find something already written for it. Do you know of a similar system, preferably open source? A commercial solution could be acceptable as well.
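Editorially, here is one rough, hypothetical sketch of the kind of glue rongten describes, not a drop-in solution: a root-run script (from cron or a login hook) that counts a user's login sessions via who(1) and renices that user's processes attached to pseudo-terminals, so the console user keeps priority. The session cap, nice value, and example account name are assumptions.

# Hypothetical glue script (run as root): count a user's interactive sessions
# and renice any processes attached to pseudo-terminals (remote ssh/x2go
# sessions) so the console user keeps priority. Thresholds are arbitrary.
import os
import pwd
import subprocess

MAX_SESSIONS = 2      # assumed per-user session cap
REMOTE_NICE  = 10     # niceness applied to remote (pts-attached) processes

def session_count(user):
    """Count login sessions for `user` as reported by who(1)."""
    out = subprocess.run(["who"], capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if line.split()[:1] == [user])

def renice_remote_processes(user, niceness=REMOTE_NICE):
    """Renice every process owned by `user` whose stdin is a pseudo-terminal."""
    uid = pwd.getpwnam(user).pw_uid
    for pid in (p for p in os.listdir("/proc") if p.isdigit()):
        try:
            if os.stat(f"/proc/{pid}").st_uid != uid:
                continue
            stdin_target = os.readlink(f"/proc/{pid}/fd/0")
            if stdin_target.startswith("/dev/pts/"):   # remote session, not console
                os.setpriority(os.PRIO_PROCESS, int(pid), niceness)
        except OSError:
            continue  # process exited or is inaccessible; skip it

if __name__ == "__main__":
    user = "student1"                                  # hypothetical account
    if session_count(user) > MAX_SESSIONS:
        print(f"{user} exceeds {MAX_SESSIONS} sessions; renicing remote jobs")
        renice_remote_processes(user)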

Read more of this story at Slashdot.








Exodus Intelligence Details Zero-Day Vulnerabilities In Tails OS

Tuesday 22nd of July 2014 04:10:00 PM
New submitter I Ate A Candle (3762149) writes Tails OS, the Tor-reliant, privacy-focused operating system made famous by Edward Snowden, contains a number of zero-day vulnerabilities that could be used to take control of the OS and execute code remotely. At least that's according to zero-day exploit seller Exodus Intelligence, which counts DARPA amongst its customer base. The company plans to tell the Tails team about the issues "in due time," said Aaron Portnoy, co-founder and vice president of Exodus, but it isn't giving any information on a disclosure timeline. This means users of Tails are in danger of being de-anonymised. Even version 1.1, which hit public release today (22 July 2014), is affected. Snowden famously used Tails to manage the NSA files. The OS can be run from a USB stick and leaves no trace on the host machine once the stick is removed. It uses the Tor network to avoid identification of the user, but such protections may be undone by the zero-day exploits Exodus holds.

Read more of this story at Slashdot.







