OStatic

Linux Representing at CES

Wednesday 7th of January 2015 04:06:16 AM

Linux at CES tops our news coverage for today. In other news, Opensource.com says building Linux from Scratch can help users learn "the building blocks" of Linux, and Softpedia.com says users "are going crazy" for circle icons. Elsewhere, Jack Germain spoke to The Document Foundation and the Open Source Business Alliance about reaching the goal of universal open document standards.

Linux was present today at the start of the Consumer Electronics Show. Softpedia.com is reporting that attendees have seen Ubuntu in use; perhaps the most notable sighting was the CEO of NVIDIA using Ubuntu to power his CES presentation. Softpedia.com said, "It's hard to get better endorsement for an operating system when the CEO of one of the biggest hardware companies on this planet is using your system in the most important expo of the year."

Steven J. Vaughan-Nichols is covering CES as well and said today that Linux is not only dominating supercomputing and mobile computing, but is also tops in home entertainment. He said this year's CES is all about the TV, and the new ones are going 4K - and Linux. "What the vendors also aren't telling you, until you look into the fine print, is that behind all the display magic is one version or another of Linux." Also from Vaughan-Nichols is the news that Dell will again offer Ubuntu on its latest high-end and business laptops.

And finally from CES is the news that Microchip Technology Inc. "has joined the Linux Foundation and Automotive Grade Linux to develop software for the connected car." According to the brief post, the announcement was made today, the first day of CES. Speaking of the Linux Foundation, it today announced three other new members: "IIX Inc., Micron Technology, Inc., and Planisys." CES 2015 runs from January 6 through January 9.

In other news:

* The Long Slog to Level the Document Playing Field

* The building blocks of a distribution with Linux from Scratch

* Linux Users Are Going Crazy About Circle Icons

* My Linux Setup: Michal Papis, Open Source Developer

* Richard Koh: Open source has many doors but no locks


Citrix, Apache and Others Still Committed to CloudStack

Tuesday 6th of January 2015 04:05:49 PM

In case you were wondering about recent reports of the demise of CloudStack, the folks at Citrix are remaining adamant that the cloud computing platform is healthy and in use at lots of notable organizations. In fact, at the recent CloudStack Collaboration Conference in Budapest, Autodesk, China Telecom, Dell, Walt Disney, and Huawei were all reported among active users of the platform. 

CloudStack doesn't get as much hype as OpenStack, but it is advancing as an open source platform under Apache and has a commercial arm overseen by Citrix.

Speakers at the conference in Budapest confirmed, as Citrix officials have said before, that CloudStack doesn't get the marketing and headlines that OpenStack gets, but were adamant that the cloud platform is flourishing.

TechWorld quoted Mark Hinkle, senior director of open source solutions at Citrix, on the platform: "CloudStack supports what can be described as a 'minimum viable cloud' [in terms of time to market]. There are multiple projects going on with OpenStack [via the OpenStack Foundation], while ours is more specific - cloud orchestration through a single interface."

Hinkle has also stressed that reports from a few months ago about the demise of CloudStack are off base. In an online post, he writes:

"Even though we support and sponsor a great deal of development in Apache CloudStack we participate in a much larger cloud community...Unfortunately some of the pundits in our industry are speculating as part of a recent reorganization at Citrix (and the departure of some of our former colleagues to pursue other opportunities) that this is a sign that we are abandoning our commitment to Apache CloudStack and the project would die. That’s probably because they don’t exactly understand how the Apache Software Foundation (ASF) works and how Citrix supports them."

"You can’t equate Citrix and Apache CloudStack. Even though many companies employ developers there is no company that can buy influence. A company can’t leave the project only individuals can choose to participate or not. It’s unique compared to many other similar organizations so it’s no wonder they are confused. The fact of the matter is that Citrix will continue to support Apache CloudStack and will continue to collaborate with a growing community of developers and users."

 Citrix has steadily maintained that while lots of enterprises are only considering OpenStack, CloudStack exists in many working deployments. So it's definitely not time to write CloudStack off.

We covered Apache's updates of the open source CloudStack platform last year.  "This latest version of CloudStack reflects months of hard work by our diverse developer community and brings even more features to help our service-provider and enterprise users enhance their cloud platforms," said Giles Sirett, member of the Apache CloudStack Project Management Committee, in a statement. "Apache CloudStack continues to grow in both deployments and developer community size, and is the platform of choice for thousands of organizations that need to build IaaS environments quickly and securely with a proven, production-grade, technology."



Blue Box Gets $4 Million in Private Cloud Funding, Mystery Investor

Tuesday 6th of January 2015 03:51:07 PM

Blue Box, which offers private cloud services based on OpenStack to big companies including Viacom, has just announced a new round of $4 million in funding on top of the $10 million the company raised late last year. And, as Silicon Angle notes, some of the funding comes from an unnamed telco: "The latest $4 million leg saw an unnamed US telco come aboard as the second strategic investor in the startup, triggering speculation about which carrier would throw its weight behind an OpenStack hosting provider."

The new funding is yet another sign of the growing interest that telcos have in emerging cloud computing platforms, and OpenStack in particular.

Blue Box offers private-cloud-as-a-service functionality based on Rackspace OpenStack, and has some large customers. The company's business model is the opposite of the ones followed by public cloud providers such as Amazon Web Services and Microsoft Azure: Blue Box's customers want completely private cloud services.

Named investors in Blue Box's latest round of funding include Voyager Capital, Founder Collective, and the Blue Box executive team. However, it's notable that an unnamed telco is interested in Blue Box.

"There are good reasons for telcos to show interest," notes Silicon Angle. "The cloud operating system is the default provisioning engine in OpenDaylight, a controller that Cisco Systems Inc. and other industry bigwigs hope to position as the unifying standard for software-defined networks."

Cisco, Red Hat and many other companies have focused on telcos as a priority within the OpenStack ecosystem. Last year, news came from Red Hat that it is collaborating with eNovance, a leader in the open source cloud computing market, to drive Network Functions Virtualization (NFV) and telecommunications features into OpenStack. And Telefonica has announced collaborations with Red Hat and Intel to create a virtual infrastructure management (VIM) platform based on open source software running on Intel-based servers.

Blue Box, it seems, may become a player to watch as the OpenStack ecosystem and telco interests converge.


Hel-lo Makulu and Goodbye Zeven

Tuesday 6th of January 2015 04:16:50 AM

Today in Linux news, ZevenOS has decided to rest on Ubuntu's laurels. Jamie Watson says Makulu 7 Xfce is the most beautiful distro he's ever seen and Dedoimedo says Elementary 0.3 is "purrty." The Korora project released version 21 Beta and Derrik Diener highlights the top five Arch derivatives. And finally today, we have a year-end Linux recap and another Linus quote is ruffling feathers.

Our top story today is the bewildering post from the ZevenOS project saying their newest release should last users a while. Dubbed "Goodbye Edition," version 6.0 was released on the last day of 2014 with the addendum that it "will be the last ZevenOS Version for a long long time." Apparently the decision was reached because Ubuntu 14.04 is an LTS release and will be supported for five years. In the small print, a disclaimer warned prospective users that this release was rushed out, not well tested, and will probably have bugs. Yep, that's what you want to read when downloading your OS for the next five years.

Reviewer and blogger Jamie Watson recently tested Makulu Linux 7.0 Xfce, saying it has "two major things going for it: first, it is based on Ubuntu 14.04 LTS, rather than Debian, and second, it uses the Xfce desktop." He wasn't able to convert the ISO for USB and there's no UEFI support, but he said the installer is improved and the desktop is "gorgeous." After running down some of its little extras, Watson said, "I strongly recommend giving this Makulu Linux release a try. At least boot up the Live image and see for yourself how it looks, and how it works on your system."

Linus Torvalds is starting the year off by turning over a new leaf. No, you know better than that. Torvalds recently upgraded to Fedora 21 and was a bit frustrated by the default terminal application. He posted a bit on his public Google+ page about his experiences upgrading, saying it wasn't the smoothest upgrade ever. Once at the desktop, he noted that 'gnome extensions don't work, since the gnome shell "versioning" is a joke.' But the observation that set off a comment frenzy read:

- the new gnome-terminal seems to default into a new "Emo mode" (aka "Dark Theme"). I don't know who thought it was a good idea to make a terminal application have its own depressed theme different from all other applications, but I'm guessing they spend their days cutting themselves and listening to death metal, and thinking they are "cool".

In other news:

* Top 5 Arch Linux Derivatives

* Big Year for Enterprise Linux Distros Includes Major Updates

* Linux in 2015: Distros on Tap

* Korora 21 (Darla) Beta - Now Available

* elementary OS 0.3 Freya beta review


Red Hat Marks a Strong 2014, Sharpens Focus on OpenStack

Monday 5th of January 2015 04:16:12 PM

As 2014 ended, there were many eloquent summaries of the state of open source and the state of cloud computing, but one of the most focused ones came from Red Hat CEO Jim Whitehurst. In an online post that was fresh on the heels of a knockout financial quarter for Red Hat, Whitehurst lauded the fact that open source technology is now pervasive, and provided glimpses of how his company is gaining momentum with its cloud efforts.

“Today, it is almost impossible to name a major player in IT that has not embraced open source,” Whitehurst said. “Only a few short years ago, many would have argued we would never see that day.”

As a matter of fact, for several years Whitehurst has been predicting that open source would be used at every major enterprise and would exist at the component level in most software releases.

Whitehurst also took note of the fact that even Microsoft now claims to love open source and Linux.  Several media outlets toward the end of last year reported on Microsoft CEO Satya Nadella's comments on how he "loves Linux" and how he reportedly claims that 20 percent of Microsoft's Azure cloud is already Linux-based.

"Open source was initially adopted for low cost and lack of vendor lock-in, but customers have found that it also results in better innovation and more flexibility,” Whitehurst wrote. “Now it is pervasive, and it is challenging proprietary incumbents across technology categories. It is not only mainstream, open source is truly leading innovation in areas like cloud, mobile, big data, the Internet of Things, and beyond.”

Red Hat just marked its 11th consecutive quarter of mid-to-high teens revenue growth, and its stock jumped up in response. Red Hat is placing big bets on the OpenStack cloud platform, and this past quarter was the first one where OpenStack's impact on Red Hat's finances was really apparent. 

Red Hat is also in partnership with Cisco to bring cloud solutions to enterprises, and we'll likely see cloud computing become as significant a business for Red Hat as its Linux segment has been.

Whitehurst added the following: "For years, we tackled questions: 'Is open source safe?' 'Is it secure?' 'Is it reliable?' Open source solutions are all of those things and more. They are widely embraced by the enterprise; Red Hat alone counts more than 90% of the Fortune 500 as our customers. Today, with virtually every major technology company adopting or embracing open source, I am not hearing those same questions at the CEO and CIO level."

Read more here: http://www.newsobserver.com/2014/12/26/4430284/red-hat-ceo-lauds-open-sources.html#storylink=cpy


Tesora Trove DBaaS Certified for Mirantis OpenStack Cloud Platform

Monday 5th of January 2015 03:55:55 PM

At a steady clip, database-as-a-service functionality has been emerging as an important component of the evolution of the OpenStack cloud computing platform. When the OpenStack Icehouse version arrived in April of last year, the Trove database-as-a-service project was one of the under-the-hood offerings. And since then, OpenStack releases have featured significantly improved versions of Trove.

Now, the first Mirantis-certified OpenStack Trove Database-as-a-Service offering is here. The certification provides assurance that Tesora's DBaaS works with Mirantis OpenStack, an increasingly widely used version of the open source cloud platform.

 According to Tesora's announcement:

"Tesora certification and support includes the following databases: MongoDB, MySQL Community Edition, Percona Server, MariaDB, Redis and Cassandra. In addition to Mirantis OpenStack, Tesora certifies with Red Hat and Ubuntu. This ensures that the Trove-based Tesora DBaaS Platform installs, configures and operates properly with popular OpenStack distributions."

"Tesora adds the popular enterprise distribution from Mirantis to the Tesora OpenStack Database Certification Program ensuring compatibility with both NoSQL and SQL database management systems on Trove and eliminating the headache of enterprises having to test integration on their own."

"We're providing Mirantis customers with the most production-ready OpenStack Trove implementation, saving the work and time necessary to ensure compatibility," said Ken Rugg, CEO of Tesora, in a statement. "Working with Mirantis, we are realizing the potential of different technologies coming together as part of the OpenStack ecosystem, and giving enterprises the most complete and reliable OpenStack Trove available today."

"We are seeing great interest in OpenStack as an enterprise private cloud platform, and database as a service with Trove is important to the project," said Boris Renski, chief marketing officer at Mirantis. "Our partnership with Tesora gives our customers greater confidence when provisioning and managing a wide range of enterprise databases in their OpenStack clouds."

Many enterprises are seeking to leverage Trove as they put applications in the cloud. However, databases can be resource intensive, and many don't work well in cloud environments, so there is much scrutiny of the development of Trove for high-availability OpenStack deployments. That's why certification and validation will remain important as Trove moves forward.


Big Data Becomes a Market Force, Ushering in Change

Friday 2nd of January 2015 04:37:11 PM

As reported here earlier, a new KPMG study on cloud computing trends at enterprises shows that executives are very focused on extracting business metrics from their cloud computing and data analytics platforms. That suggests that we're going to continue to see the cloud and the Big Data trend evolve together this year.

Here are some of the biggest things to know about Big Data as we begin the new year.

Big Data is a Market Force. In case you missed it, one of the successful IPOs of late 2014 came from Hortonworks, which focuses on the open source Big Data platform Hadoop. The success of the IPO drove home how focused many enterprises are on yielding more useful insights from their troves of data than standard data mining tools can provide. There will be other IPOs surrounding the Big Data trend, which is now a powerful market force.

Certification and Validation on the Rise. Companies are looking for employees with Big Data skills, and they are demanding certified and validated solutions. With that in mind, Hortonworks has extended its technology partner program with three new certifications: HDP Operations Ready, HDP Security Ready and HDP Governance Ready. These enterprise components of the certification program are aimed at helping organizations adopt a modern data architecture with the Hortonworks Data Platform (HDP), backed by the key capabilities required of an enterprise Hadoop platform. Other players will follow with similar offerings.

Consideration is Becoming Adoption.  As is true with OpenStack and open cloud platforms, some enterprises are still just considering Big Data strategies, but that is changing. As Matt Asay notes: "If 2014 was the year that enterprises desperately tried to take off the Big Data training wheels, 2015 will be the year they succeed. Ironically, this won't be because they master the intricacies of Hadoop and Spark. Instead, it will be because 2015 will be the year we stop trying to make every data problem into a Hadoop problem and instead use the right tool for the job."

Forrester Research wrote last month that Hadoop is now "a must have for large organizations." And, indeed, large companies ranging from Yahoo to eBay make extensive use of the platform.

Big Data Training Will Expand.  Hortonworks now offers extensive training on Hadoop-centric topics, as we covered here, but Hadoop is not the only player in Big Data. Tools like Apache Drill and other solutions will yield useful training offerings. Analysts and developers can use Drill to interactively explore data in Hadoop or other NoSQL databases, such as HBase and MongoDB. Apache always does a good job of evolving training solutions alongside its platforms.

In enterprises as well as small businesses, the Big Data trend--sorting and sifting large data sets with new tools in pursuit of surfacing meaningful angles on stored information--is on the rise, and we'll see even more of it in 2015.


Survey Reveals Cloud Computing Trends Coming from Enterprises

Friday 2nd of January 2015 04:12:56 PM

With 2014 gone, businesses are set to roll out their 2015 cloud computing playbooks, and a lot of them are going to be focused on emerging open cloud platforms. As is usually true as a new year begins, survey results are appearing that put some metrics on the cloud plans that are set to be put in place.

According to a report from WANTED Analytics: "There are 3.9 million jobs in the U.S. affiliated with cloud computing today with 384,478 in IT alone. The median salary for IT professionals with cloud computing experience is $90,950 and the median salary for positions that pay over $100,000 a year is $116,950." In addition, a new KPMG study, 2014 Cloud Survey Report: Elevating Business in the Cloud, shows that executives are rapidly changing how they think about the cloud.

The KPMG study is done annually and involves responses from C-level executives. The good news is that 73 percent of them are seeing improved business performance after implementing cloud-based applications and strategies. And, notably, 35 percent of enterprises adopting cloud computing platforms are interested in business analytics.

That last finding implies that we could see more convergence between the cloud and Big Data tools such as Hadoop.

The KPMG study is available for download here.

The study includes a graphic showing that driving cost efficiencies is top of mind at many enterprises considering cloud platforms.

IDG Enterprise also came out recently with results from a new survey it did involving 1,672 IT decision-makers, and they show that cloud adoption of all kinds continues apace.

The results showed:

"More than two-thirds (69%) of companies have already made cloud investments. The rest plan to do so within the next three years. Companies appear to be moving steadily: Respondents anticipate their cloud usage will expand, on average, by 38% in the next 18 months. At the end of 2015, companies expect to be operating an average of 53% of their IT environments in the cloud."

Red Hat and other companies have been adamant that hybrid clouds are the wave of the future, and IT managers in IDG Enterprise's survey showed allegiance to public, private and hybrid clouds. "On average, cloud deployments are split almost evenly between public (15%) and private (19%) implementations," the report notes. "Although companies intend to adopt public cloud at a somewhat faster pace than private cloud, private cloud models will continue to have the edge."


New Language from MIT Streamlines Building SQL-Backed Web Applications

Wednesday 31st of December 2014 04:27:12 PM

There are countless developers and administrators creating and deploying online applications backed by SQL databases.

The problem is that building and deploying them is not the easiest nut to crack, given the complexity of marrying HTML, JavaScript and other tools and components.

That's exactly the problem that Adam Chlipala, an assistant professor of Electrical Engineering and Computer Science at MIT, is trying to solve with Ur/Web, a domain-specific functional programming language for modern Web applications. The language encapsulates many of the key components needed for robust applications in a single language, and can help ensure the security of those applications.

According to Chlipala:

"My research applies formal logic to improve the software development process. I spend a lot of time proving programs correct with the Coq computer proof assistant, with a focus on reducing the human cost of program verification so that we can imagine that it could one day become a standard part of software development (at least for systems software). I'm also interested in the design and implementation of programming languages, especially functional or otherwise declarative languages, especially when expressive type systems (particularly dependent type systems) are involved. I usually stick to very low-level or very high-level languages; I believe that most 'general-purpose languages' of today fail to hit the mark by being, for any particular software project, either too low-level or too high-level."

Chlipala also emphasizes that Ur/Web is not only a research prototype. It has a growing programmer community and some commercial application development underway. As an explanatory page notes:

"Ur/Web supports construction of dynamic web applications backed by SQL databases. The signature of the standard library is such that well-typed Ur/Web programs "don't go wrong" in a very broad sense. Not only do they not crash during particular page generations, but they also may not:

* Suffer from any kinds of code-injection attacks

* Return invalid HTML

* Contain dead intra-application links

* Have mismatches between HTML forms and the fields expected by their handlers

* Attempt invalid SQL queries

* Use improper marshaling or unmarshaling in communication with SQL databases or between browsers and web servers
"This type safety is just the foundation of the Ur/Web methodology. It is also possible to use metaprogramming to build significant application pieces by analysis of type structure. For instance, the demo includes an ML-style functor for building an admin interface for an arbitrary SQL table. The type system guarantees that the admin interface sub-application that comes out will always be free of the above-listed bugs, no matter which well-typed table description is given as input."

"The Ur/Web compiler also produces very efficient object code that does not use garbage collection. These compiled programs will often be even more efficient than what most programmers would bother to write in C. For example, the standalone web server generated for the demo uses less RAM than the bash shell. The compiler also generates JavaScript versions of client-side code, with no need to write those parts of applications in a different language."

"The implementation of all this is open source."
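The "code-injection attacks" item on that list is worth unpacking: Ur/Web rules out the bug class at compile time, while in a mainstream language the programmer must avoid it by convention. Here is a rough Python sketch of the bug and its conventional fix; the table and data are invented for illustration and have nothing to do with Ur/Web's implementation:

```python
import sqlite3

# Toy database standing in for an application's backing store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_unsafe(name):
    # String interpolation builds the query text from user input;
    # a crafted name turns the WHERE clause into a tautology.
    query = "SELECT role FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_safe(name):
    # Placeholder binding: the driver treats the value as data,
    # never as SQL text, so the same crafted input matches nothing.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_unsafe(payload))  # leaks every row in the table
print(find_safe(payload))    # matches nothing
```

In Python this discipline is enforced only by code review; the point of Ur/Web is that the type system makes the unsafe form unwritable in the first place.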

You can get the latest distribution of Ur/Web here.


Pear Returning, In the Movies, and More Highlights

Wednesday 31st of December 2014 04:21:40 AM

Today in Linux news Softpedia.com is reporting that Pear OS is making signs of a comeback. In other news, Debian is spotted in a new movie and Phil Shapiro shares a cheap laptop story. We have 2014 highlights on Ubuntu, GNOME, and FOSS in general as well as Jack Wallen's wishes for the new year.

Linux was spotted on the big screen again, this time in the sci-fi thriller Lucy. Softpedia.com is reporting that you can clearly see a Linux distribution running Xfce in an important scene in the movie. They say it looks most like Debian, and they've posted a video with those seconds of interest.

Speaking of Softpedia.com, they also relayed today the rumor that Pear OS may be making a comeback. Pear OS was a Linux distribution that looked disturbingly like Mac OS X and disappeared about a year ago. Well, someone spotted a new screenshot, as if to tease a new Pear OS release. On this, Softpedia said, "From what little it can be discerned from the image, it could just be the real deal. The quality of the desktop matches what we would expect from Pear OS, but all those watermarks are strange."

OMG!Ubuntu! today looked back at the year in Ubuntu with the big developments each month. Christine Hall at Foss Force looks at the five biggest FOSS stories of the year. Systemd and Devuan made her list. The GNOMEs posted their highlights of the year including the releases of 3.12 and 3.14. And finally, Jack Wallen shares his wish list for the new year including hopes that the Ubuntu Phone actually gets released.

In other news:

* Install Linux on a used laptop

* Fedora Does Real World Work. Debian is for Hobbyists

* Debian Project News - December 29th, 2014

* The Best Linux Software

* Who wrote systemd?


Apache Marks Year's End By Graduating Two Big Data Projects

Tuesday 30th of December 2014 04:35:37 PM

As this year draws to a close, it's worth taking note of two important projects from the Apache Software Foundation (ASF) that have graduated to top-tier project status, ensuring them development resources and more. Apache MetaModel went from the Apache Incubator to become a Top Level Project. It provides a model for interacting with data based on metadata, and developers can use it to go beyond physical data layers and work with most any form of data.

Meanwhile, we've also covered the news of Apache Drill graduating to Top Level Project status. Drill is billed as the world's first schema-free SQL query engine that delivers real-time insights by removing the constraint of building and maintaining schemas before data can be analyzed.

We ran an interview with Tomer Shiran (shown above), a member of the Drill Project Management Committee, to get his thoughts. He said:

"Analysts and developers can use Drill to interactively explore data in Hadoop and other NoSQL databases, such as HBase and MongoDB. There's no need to explicitly define and maintain schemas, as Drill can automatically leverage the structure that's embedded in the data."

"This enables self-service data exploration, which is not possible with traditional data warehouses or SQL-on-Hadoop solutions like Hive and Impala, in which DBAs must manage schemas and transform the data before it can be analyzed."

"Drill is the ideal interactive SQL engine for Hadoop. One of the main reasons organizations choose Hadoop is due to its flexibility and agility. Unlike traditional databases, getting data into Hadoop is easy, and users can load data in any shape or size on their own. Early attempts at SQL on Hadoop (eg, Hive, Impala) force schemas to be created and maintained even for self-describing data like JSON, Parquet and HBase tables."

"These systems also require data to be transformed before it can be queried. Drill is the only SQL engine for Hadoop that doesn't force schemas to be defined before data can be queried."
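Drill's "schema-free" pitch can be hard to picture. As a toy illustration only (this is plain Python, not Drill, and the sample records are invented), the idea is that self-describing data carries its own structure, which the engine discovers at query time instead of requiring a declared schema:

```python
import json

# Hypothetical self-describing records (think JSON files or HBase rows);
# note that the fields vary from record to record.
raw = """
{"name": "web-01", "cpu": 0.72, "tags": ["prod"]}
{"name": "web-02", "cpu": 0.31}
{"name": "db-01", "cpu": 0.95, "disk": {"used_gb": 410}}
"""

records = [json.loads(line) for line in raw.strip().splitlines()]

# The "schema" is discovered from the data at query time, Drill-style,
# rather than defined and maintained up front by a DBA.
schema = sorted({key for rec in records for key in rec})
print(schema)  # ['cpu', 'disk', 'name', 'tags']

# A filter over a field that only some records define: missing fields
# behave like NULL instead of raising an error.
hot = [rec["name"] for rec in records if rec.get("cpu", 0) > 0.5]
print(hot)  # ['web-01', 'db-01']
```

Drill does this over distributed data with a full SQL dialect; the sketch only shows why no upfront schema definition is needed when the records describe themselves.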

According to eWeek, regarding MetaModel:

"Apache MetaModel is a data access framework that provides a common interface for the discovery, exploration, and querying of different types of data sources. Unlike traditional mapping frameworks, MetaModel emphasizes metadata of the data source itself and the ability to add more data sources at runtime. MetaModel's schema model and SQL-like query API is applicable to databases, CSV files, Excel spreadsheets, NoSQL databases, Cloud-based business applications, and even regular Java objects. This level of abstraction makes MetaModel great for dynamic data processing applications, less so for applications modeled strictly around a particular domain, ASF officials said."

"MetaModel enables you to consolidate code and consolidate data a lot quicker than any other library out there," said Kasper Sorensen, vice president of Apache MetaModel, in a statement. "In these 'big data days' there's a lot of focus on performance and scalability, and surely these topics also surround Apache MetaModel. The big data challenge is not always about massive loads of data, but instead massive variation and feeding a lot of different sources into a single application. Now to make such an application you both need a lot of connectivity capabilities and a lot of modeling flexibility. Those are the two aspects where Apache MetaModel shines. We make it possible for you to build applications that retain the complexity of your data – even if that complexity may change over time. The trick to achieve this is to model on the metadata and not on your assumptions."
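That idea of "modeling on the metadata and not on your assumptions" can be sketched in a few lines. This is an illustrative Python toy, not MetaModel's actual Java API; the class and function names are made up. The point is that one query call works unchanged over sources of different types because each source exposes columns discovered from the source itself:

```python
import csv
import io

class CsvSource:
    # Columns come from the file's own header row, not from a mapping
    # declared in advance.
    def __init__(self, text):
        rows = list(csv.reader(io.StringIO(text.strip())))
        self.columns = rows[0]
        self.rows = [dict(zip(self.columns, r)) for r in rows[1:]]

class ObjectSource:
    # Columns are discovered from the objects themselves at runtime.
    def __init__(self, objects):
        self.rows = list(objects)
        self.columns = sorted({k for obj in self.rows for k in obj})

def select(source, column, where=lambda row: True):
    # The caller never sees whether the data came from CSV or plain
    # objects; real MetaModel adds databases, spreadsheets, NoSQL, etc.
    return [row[column] for row in source.rows if where(row)]

csv_src = CsvSource("id,country\n1,DK\n2,US")
obj_src = ObjectSource([{"id": 3, "country": "DE"}])

print(select(csv_src, "country"))  # ['DK', 'US']
print(select(obj_src, "country"))  # ['DE']
```

Adding a new source type means writing one adapter that reports its own metadata; every existing query keeps working, which is the "consolidate code" claim in the quote above.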

 On the topic of what graduation to Top Level Project status means at Apache, Tomer Shiran said:

"Graduation is a decision made by the Apache Software Foundation (ASF) board, and it provides confidence to potential users and contributors that the project has a strong foundation. From a governance standpoint, a top-level project has its own board (also known as PMC). The PMC Chair (Jacques Nadeau) is a VP at Apache."

Related Activities

Related Software Related Blog Posts


Apache Markes Year's End By Graduating Two Big Data Projects

Tuesday 30th of December 2014 04:35:37 PM

As this year draws to a close, it's worth taking note of two important projects from the Apache Software Foundation (ASF) that have graduated to top-tier project status, a move that secures them dedicated development resources and more. Apache MetaModel went from the Apache Incubator to become a Top Level Project. It provides a model for interacting with data based on metadata, and developers can use it to go beyond physical data layers and work with almost any form of data.

Meanwhile, we've also covered the news of Apache Drill graduating to Top Level Project status. Drill is billed as the world's first schema-free SQL query engine that delivers real-time insights by removing the constraint of building and maintaining schemas before data can be analyzed.

We ran an interview with Tomer Shiran, a member of the Drill Project Management Committee, to get his thoughts. He said:

"Analysts and developers can use Drill to interactively explore data in Hadoop and other NoSQL databases, such as HBase and MongoDB. There's no need to explicitly define and maintain schemas, as Drill can automatically leverage the structure that's embedded in the data."

"This enables self-service data exploration, which is not possible with traditional data warehouses or SQL-on-Hadoop solutions like Hive and Impala, in which DBAs must manage schemas and transform the data before it can be analyzed."

"Drill is the ideal interactive SQL engine for Hadoop. One of the main reasons organizations choose Hadoop is due to its flexibility and agility. Unlike traditional databases, getting data into Hadoop is easy, and users can load data in any shape or size on their own. Early attempts at SQL on Hadoop (eg, Hive, Impala) force schemas to be created and maintained even for self-describing data like JSON, Parquet and HBase tables."

"These systems also require data to be transformed before it can be queried. Drill is the only SQL engine for Hadoop that doesn't force schemas to be defined before data can be queried."

According to eWeek, regarding MetaModel:

"Apache  MetaModel is a data access framework that provides a common interface  for the discovery, exploration, and querying of different types of data  sources. Unlike traditional mapping frameworks, MetaModel emphasizes  metadata of the data source itself and the ability to add more data  sources at runtime. MetaModel's schema model and SQL-like query API is  applicable to databases, CSV files, Excel spreadsheets, NoSQL databases,  Cloud-based business applications, and even regular Java objects. This  level of abstraction makes MetaModel great for dynamic data processing  applications, less so for applications modeled strictly around a  particular domain, ASF officials said."

 "MetaModel  enables you to consolidate code and consolidate data a lot quicker than  any other library out there," said Kasper Sorensen, vice president of  Apache MetaModel, in a statement. "In these 'big data days' there's a  lot of focus on performance and scalability, and surely these topics  also surround Apache MetaModel. The big data challenge is not always  about massive loads of data, but instead massive variation and feeding a  lot of different sources into a single application. Now to make such an  application you both need a lot of connectivity capabilities and a lot  of modeling flexibility. Those are the two aspects where Apache  MetaModel shines. We make it possible for you to build applications that  retain the complexity of your data – even if that complexity may change  over time. The trick to achieve this is to model on the metadata and  not on your assumptions."

On the topic of what graduation to Top Level Project status means at Apache, Tomer Shiran said:

"Graduation is a decision made by the Apache Software Foundation (ASF) board, and it provides confidence to potential users and contributors that the project has a strong foundation. From a governance standpoint, a top-level project has its own board (also known as PMC). The PMC Chair (Jacques Nadeau) is a VP at Apache."









Docker Reigned in 2014, But Competition is Coming

Tuesday 30th of December 2014 04:12:33 PM

Container technology was without a doubt one of the biggest stories of 2014, and if you mention the container arena to most people, Docker is what they think of. As impressive as Docker is, as recently as June of last year, OStatic highlighted some of its instabilities.

As 2014 ends, we are about to see the container space get a whole lot more complicated and competitive. Some big fish are swimming right next to Docker. Google has set its sights squarely on Docker by transforming its Kubernetes platform into a full-fledged part of Google Cloud Platform with Google Container Engine. Meanwhile Canonical is leaping into the virtualization arena with a new hypervisor called LXD that uses the same Linux container tools that have allowed Docker to isolate instances from one another. And I've reported on how Joyent has announced that it is open sourcing its core technology, which can compete with OpenStack and other cloud offerings and facilitates efficient use of container technologies like Docker.

A few months ago, I covered the news that Google had released Kubernetes, essentially an open source version of Borg, designed to harness computing power from data centers into a powerful virtual machine. It can make a difference for many cloud computing deployments and optimizes usage of container technology. You can find the source code for Kubernetes on GitHub.

Following my initial report, news arrived that some very big contributors to the Kubernetes project, including IBM, Microsoft, Red Hat, Docker, CoreOS, Mesosphere, and SaltStack, are working in tandem on open source tools and container technologies that can run on multiple computers and networks. Now, Google has transformed Kubernetes into a full-fledged part of Google Cloud Platform with Google Container Engine.

In a blog post, Brian Stevens, VP of Product Management, describes Google Container Engine as a way to move from managing individual virtual machines to running containers in a managed cluster:

"Google Container Engine lets you move from managing application components running on individual virtual machines to launching portable Docker containers that are scheduled into a managed compute cluster for you. Create and wire together container-based services, and gain common capabilities like logging, monitoring and health management with no additional effort. Based on the open source Kubernetes project and running on Google Compute Engine VMs, Container Engine is an optimized and efficient way to build your container-based applications."

While Google is a big fish, lots of people are talking about Canonical's LXD project as well. As noted by Silicon Angle:

"Canonical Ltd. dropped a bombshell last week after revealing that its following fellow operating system vendors Red Hat Inc. and Microsoft Corp. into the virtualization market with a new hypervisor that promises to deliver the same experience as the competition faster and more efficiently. Dubbed LXD, the software relies on the same Linux containerization feature that provided the foundation for Docker to isolate instances from one another but adds integration with popular security utilities along with management and monitoring functionality."

Canonical has also recently launched a new “snappy” version of Ubuntu Core. This minimalist take on Ubuntu is especially well suited to Docker deployments and platform-as-a-service environments.

Also on the Linux competition front, we reported on how the CoreOS team is developing a Docker competitor dubbed Rocket. Rocket is a new container runtime, designed for composability, security, and speed, according to the CoreOS team. The group has released a prototype version on GitHub to begin getting community feedback.

According to a post on Rocket:

“When Docker was first introduced to us in early 2013, the idea of a “standard container” was striking and immediately attractive: a simple component, a composable unit, that could be used in a variety of systems. The Docker repository included a manifesto of what a standard container should be. This was a rally cry to the industry, and we quickly followed. We thought Docker would become a simple unit that we can all agree on.”

“Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform.”

“We still believe in the original premise of containers that Docker introduced, so we are doing something about it. Rocket is a command line tool, rkt, for running App Containers. An ‘App Container’ is the specification of an image format, container runtime, and a discovery mechanism.”

Joyent has also announced two new open source initiatives and the general availability of a container service in the Joyent Public Cloud to accelerate the adoption of application containers in the enterprise. Docker application containers are grabbing headlines everywhere and overhauling how data centers operate. Joyent maintains, though, that limitations remain in the areas of security, virtual networking, and persistence that present challenges for enterprises looking to deploy Docker in support of production applications. The open source initiatives Joyent is announcing, Linux Branded Zones (LXz) and the extension of Docker Engine to SmartDataCenter, are targeted to "deliver proven, multi-tenant security and bare metal performance to Linux applications running in Docker application containers."

Joyent maintains that with LXz, you can run Linux applications, including those running in Docker Containers, natively on secure OS virtualization without an intervening hardware hypervisor layer.

"Running Docker containers on legacy hardware hypervisor hosts, like VMware or Amazon EC2, means you give up the workload density and performance benefits associated with infrastructure containers," said Bill Fine, VP Products, Joyent. "LXz and Docker Engine for SmartDataCenter provide an infrastructure container runtime environment capable of delivering secure, bare metal performance to Docker-based applications in a multi-tenant environment." 

Docker containers will remain a big story in 2015, but Docker will also face competition. Many major public and private cloud providers advise enterprises to run Docker containers on top of legacy hardware hypervisors because of security concerns related to the default Linux infrastructure containers. Those enterprises will look closely at technology that competes with Docker, and that will be a story to watch in 2015.

 









Happy Birthday Linus, Looking Back, and Korora Tidbits

Tuesday 30th of December 2014 04:46:38 AM

Softpedia.com today remembered the birthday of our founding father Linus Torvalds. In other news some Korora tidbits popped up in the feeds and Matthias Clasen is hinting that Red Hat Enterprise Linux 7.2 may feature the latest GNOME 3.14. Phoronix.com highlights their top stories for the year in Fedora and Debian and Sean Michael Kerner looks back at the top kernel news of the year.

Happy Birthday Linus! Softpedia.com remembered Linus Torvalds' birthday this year, noting that he recently turned 45. Torvalds, the creator and head maintainer of the Linux kernel, was born December 28, 1969. By releasing Linux in 1991, he changed the world at barely 21 years of age. "He's currently employed by the Linux Foundation, where he takes care of the most advanced branch of the Linux kernel. He's the maintainer for it and he's basically the front man for the Linux kernel and a very important public figure." Softpedia.com has also linked to a Linux Foundation video on Linus.

Reflection is common at this time of the year and the tech industry headlines are overflowing. Down in our holler, Phoronix.com looks back at the year in Fedora and Debian by highlighting their most popular posts. For Fedora the year included the release of Fedora 21 and their software management developments. Systemd and the GNOME desktop were among the hottest topics in Debianland according to Phoronix. Sean Michael Kerner briefly highlights some of the major developments in the kernel space this past year and Katherine Noyes speaks to Linux bloggers on their top moments in FOSS this year.

In two brief posts this weekend, the Korora project announced that development of Korora 21 "is currently in full progress and the images are coming along well." The project didn't elaborate much beyond saying it is working hard on the upcoming beta. In a second drive-by post, the project announced Korora 19's end of life. Korora 19 and 19.1 will join the ranks of the unsupported on January 6, 2015. Users are urged to upgrade.

In other news:

* User Review of CentOS 7.0 GNOME

* RHEL 7.2 may switch to the latest GNOME 3.14

* Improving on bug reports by Bruce Byfield









More in Tux Machines

Nouveau In Linux 3.20 Will Have A Lot Of Code Cleaning

While the Nouveau pull request has yet to be issued for the DRM-Next merge window that will ultimately target the Linux 3.20 kernel, a look at the changes so far appears to mostly indicate this open-source NVIDIA driver is just going through a period of code cleaning and reorganization. Read more. Also: Linux kernels for a MacBook Pro Retina

Android Leftovers

Debian 8.0 "Jessie" Installer RC1 Released

The first release candidate for the Debian Jessie Installer has arrived, leading up to the Debian 8.0 "Jessie" release. While some Debian developers were hoping to release Debian 8.0 before February, it doesn't look like that will pan out given that the first release candidate of the installer surfaced today. Read more. Also: Debian 8.0 "Jessie" RC1 Is Here, Test Away

Firefox 35.0.1 Now Out – My God, It's Full of Fixes

Two weeks after the release of Firefox 35, the Mozilla devs have pushed the first update out the door and they have fixed a number of important crashes and various other problems. Read more