December 09, 2016

KDE Plasma 5.8 is designated an LTS edition, with bug fix releases being made for 18 months (rather than the normal four months).  This will please the category of users who don’t want new features on their desktop but do want it to keep working and bugs to be removed.  Because Neon aims to serve Plasma and its users in every way, we have now created the KDE neon User LTS Edition.

This comes with Plasma 5.8 LTS, updated for new bug fix releases (e.g. 5.8.5 is out at the end of this month), and will not change to Plasma 5.9 when it becomes available.  A common criticism of LTS editions is that they just mean users get old versions with known bugs.  KDE neon User LTS Edition comes with the latest KDE Applications, the latest KDE Frameworks release and Qt 5.7, so all the KDE software we ship is the latest stable version.  Along with the other KDE neon editions we’ll also ship the HWE updates for Linux and Mesa when they become available.

For those interested in the archive details, it’s:

deb http://archive.neon.kde.org/user/lts xenial main

Switching from the User Edition to the User LTS Edition archive is unsupported but will likely work.
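As a rough sketch of what such a switch would look like, assuming the archive entry lives in a file named /etc/apt/sources.list.d/neon.list (a hypothetical path; adjust to wherever your install keeps it):

# hypothetical file name for the neon archive entry; adjust as needed
echo 'deb http://archive.neon.kde.org/user/lts xenial main' | sudo tee /etc/apt/sources.list.d/neon.list
sudo apt update && sudo apt full-upgrade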


KDE Neon is so stable I completely forgot I was using it.

A recent Reddit post gave some pleasing feedback about KDE neon; allow me the indulgence of picking a few quotes from it:

I feel like the KDE neon team has done such a great job with an out-of-the-box experience with this distro that it feels insanely polished.

Jep, I’m even using KDE neon at work. I’ve been able to simply focus on my tasks, and not worry about troubleshooting the OS.

KDE neon cured my distro hopping as well.

KDE neon is the bee’s knees.

Anyone else feel this last one should become an official marketing slogan?

on December 09, 2016 05:04 PM



As the end of the year nears, we thought we’d take this opportunity to recap all of the ‘IoT builders’ webinars we’ve hosted in 2016. The series looks at those making a difference in the world of IoT today, who share their stories, insights and practical advice. Check out the list below:

Introduction to Ubuntu Core, an Ubuntu for IoT
Speaker: Manik Taneja (Product Manager for Ubuntu Core)

Why the world’s largest digital signage networks use Linux
Dave Haynes (Sixteen:Nine) and Jody Smith (BroadSign)

The Making of the Nextcloud Box: Building a consumer device in just a few months
Frank Karlitschek, Founder of Nextcloud

Industry 4.0 & IoT: the convergence of information and operational technology
Jimmy Garcia-Meza co-founder and CEO of CloudPlugs

Digital Signage meets IoT: Data-triggered content and 4G base stations
Sixteen:Nine editor Dave Haynes

Digital Signage Meets IoT: building success with a Raspberry Pi!
Sixteen:Nine editor Dave Haynes and Viktor Petersson, CEO of Screenly

And don’t miss our webinar happening on Tuesday 13th December, ‘2017 – what’s in store for IoT’, at 5pm (GMT) here. If you miss it, you can still watch it afterwards at the same link!

on December 09, 2016 04:25 PM

All Linux distributions are constantly updating the versions of the packages in their archives. That’s what makes them great: lots of people working in a distributed way to let you easily update your software and get the latest features or critical bug fixes.

And you should constantly update your operating system. Otherwise you’ll become an easy target for criminals exploiting known vulnerabilities.

The problem, at least for me, is that I have many, many Ubuntu machines in the house and my bandwidth is really bad. So keeping all my real machines, virtual machines and various devices up to date every day has become a slow, painful process.

The solution is to cache the downloaded deb packages. That way only one machine has to download them from the internet, and the packages are kept on my local network, making it much faster for the other machines to get them.

So let me introduce you to Apt-Cacher NG.

Setting it up is simple. First, choose a machine to run the cacher and store the packages. Ideally, this machine should be running all the time, and should have a good amount of storage space. I’m using my desktop as the cacher; but as soon as I update my router to one that runs Ubuntu, I will make that one the cacher.

On that machine, install apt-cacher-ng:

  sudo apt install apt-cacher-ng

And that’s it. The cacher is installed and configured. Now we need the name of this machine to use it on the other ones:

  $ hostname
  calchas

In this example, calchas is the name of the machine I’m using as the cacher. Take note of the name of your machine, and now, in all the other machines:

  $ sudo gedit /etc/apt/apt.conf.d/02proxy

That will create a new empty configuration file for apt, and open it to be edited with gedit, the default graphical editor in Ubuntu. In the editor, write this:

  Acquire::http::proxy "http://calchas.lan:3142";

replacing calchas with the name of your cacher machine, collected above. The .lan part is really only needed when you are setting this up in a virtual machine and the host is the same as the cacher, but it doesn’t hurt to add it on real machines. The number, 3142, is the network port where the caching service is running; leave it unchanged.
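On machines without gedit (a server, for example), a minimal sketch of writing the same setting from the command line, reusing the example cacher name from above:

  # same proxy setting as above, written non-interactively
  echo 'Acquire::http::proxy "http://calchas.lan:3142";' | sudo tee /etc/apt/apt.conf.d/02proxy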

After that, the first time you update a package on your network it will be just as slow as before. But all the other machines updating the same package will be very fast. I have to thank apt-cacher-ng for saving me many hours during my updates over the past years.

on December 09, 2016 01:43 AM

December 08, 2016


Ding ding ding! It’s Christmas time, and as part of the festive competition we’re hosting that asks you to build a seasonal snap on your Raspberry Pi… we couldn’t help but try out another example of our own!

Didier from the dev team has created a Christmas music carousel snap! The snap plays a Christmas music carousel from a selection of pre-selected tunes, or from your own MIDI files. The tracks play in random order and loop.

On a 16.04 Ubuntu desktop, you can install this as a snap:

snap install christmas-music-carousel --beta --devmode

Then, run it with:

sudo christmas-music-carousel

and let the music play! Note that you can specify a list of your favourite MIDI files here.
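As a hedged sketch (the post says you can pass your own MIDI files; the exact argument form, a plain list of paths, is an assumption here):

# hypothetical file paths; the argument syntax is assumed
sudo christmas-music-carousel ~/Music/jingle-bells.mid ~/Music/silent-night.mid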

Here is a short video of the snap in action:

But that’s not it! The really cool part is if you bring a Raspberry Pi with a PiGlow to the table, connected to the same network as your laptop!


The LEDs light up in sync with the music carousel playing on your laptop, with no configuration and no cable between the laptop (playing the music) and the Raspberry Pi (lighting the LEDs). Christmas magic, we said!

Here is a video of this in action:

To get that working, you need a Raspberry Pi running Ubuntu Core with a PiGlow attached. Install the grpc-piglow snap on it:

snap install grpc-piglow --beta --devmode

Then, run the Christmas music carousel binary on your laptop just as before. Note that you can use --brightness to adjust the brightness of the LEDs remotely.
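For example (the flag name comes from the post; the value here is just an assumed illustration):

sudo christmas-music-carousel --brightness 30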

Happy Christmas, and feel free to use this as inspiration to submit your own Christmas snaps to our snap competition with great prizes!

Technical info and source code are on GitHub, for the christmas music carousel and the gRPC PiGlow project.

on December 08, 2016 04:34 PM

S09E41 – Pine In The Neck - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Season Nine Episode Forty-One of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Joe Ressington are connected and speaking to your brain.

We are four once more, thanks to some help from our mate Joe!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on December 08, 2016 03:00 PM
A few weeks ago, I traveled to Bucharest, Romania for a busy week of work, planning the Ubuntu 17.04 (Zesty) cycle.

I did have a Saturday and Sunday to myself, which I spent mostly walking around the beautiful, old city. After visiting the Romanian Athenaeum, I quite randomly stumbled into one truly unique experience. I passed a shop window for "Sir Ludovic Master Suit Maker" which somehow caught my eye.



I travel quite a bit on business, and you'll typically find me wearing a casual sports coat, a button up shirt, nice jeans, cowboy boots, and sometimes cuff links. But occasionally, I feel a little under-dressed, especially in New York City, where a dashing suit still rules the room.

Frankly, everything I know about style and fashion I learned from Derek Zoolander. Just kidding. Mostly.

Anyway, I owned two suits. One that I bought in 2004, for that post-college streak of weddings, and a seersucker suit (which is dandy in New Orleans and Austin, but a bit irreverent for serious client meetings on Wall Street).

So I stepped into Sir Ludovic, merely as a curiosity, and walked out with the most rewarding experience of my week in Romania. Augustin Ladar, the master tailor and proprietor of the shop, greeted me at the door. We then spent the better part of 3 hours, selecting every detail, from the fabrics, to the buttons, to the stylistic differences in the cut and the fit.




Better yet, I absorbed a wealth of knowledge on style and fashion: when to wear blue and when to wear grey, why some people wear pin stripes and others wear checks, authoritative versus friendly style, European versus American versus Asian cuts, what the heck herringbone is, how to tell if the other guy is also wearing hand tailored attire, and so on...

Augustin measured me for two custom tailored suits and two bespoke shirts, on a Saturday. I picked them up 6 days later on a Friday afternoon (paying a rush service fee).

Wow. Simply, wow. Splendid Italian wool fabric, superb buttons, eye-catching color shifting inner linings, and an impeccably precise fit.









I'm headed to New York for my third trip since, and I've never felt more comfortable and confident in these graceful, classy suits. A belated thanks to Augustin. Fabulous work!



Cheers,
Dustin
on December 08, 2016 02:33 PM

There is an empty chair at the conference table of business professionals, an unassigned place that increasingly demands the presence of a new type of integration manager. The ever-increasing specialization imposed by the modern world is bringing out, with great emphasis, the need for an interdisciplinary professional who understands the demands of specialists and who is able to coordinate and link actions and decisions. This need, often still ignored, is a direct result of the growing complexity of the modern world and the fast communications inside the network.

“Complexity” is undoubtedly the most suitable paradigm to characterize the historical and social model of today’s world, in which the interactions and connections between the various areas now form an inextricable network of relations. Starting in the ’60s and ’70s, a large group of scholars – including the chemist Ilya Prigogine and the physicist Murray Gell-Mann – began to study what would become a true Science of Complexity.

Yet this is not an entirely new concept: the term means “composed of several parts connected to and dependent on each other“, exactly like reality, nature, society, and the environment around us. A “complex” mode of thought integrates and considers all contexts, interconnections and interrelationships between the different realities as part of the vision.

What is professionalism? And who are professionals? What can define a professional? <…>

<Read More…[by Fabio Marzocca]>

on December 08, 2016 02:02 PM

From Linux kernel livepatches to encryption to ASLR to compiler optimizations and configuration hardening, we strive to ensure that Ubuntu 16.04 LTS is the most secure Linux distribution out of the box.

These slides try to briefly explain:

  • what we do to secure Ubuntu
  • how the underlying technology works
  • when the features took effect in Ubuntu

I hope you find this slide deck informative and useful!  The information herein is largely collected from the Ubuntu Security Features wiki page, where you can always find up to date information.



Cheers,
Dustin
on December 08, 2016 01:28 PM

UbuCon Europe 2016

Nathan Haines


Nathan Haines enjoying UbuCon Europe

If there is one defining aspect of Ubuntu, it's community. All around the world, community members and LoCo teams get together not just to work on Ubuntu, but also to teach, learn, and celebrate it. UbuCon Summit at SCALE was a great example of an event that was supported by the California LoCo Team, Canonical, and community members worldwide coming together to make an event that could host presentations on the newest developer technologies in Ubuntu, community discussion roundtables, and a keynote by Mark Shuttleworth, who answered audience questions thoughtfully, but also hung around in the hallway and made himself accessible to chat with UbuCon attendees.

Thanks to the Ubuntu Community Reimbursement Fund, the UbuCon Germany and UbuCon Paris coordinators were able to attend UbuCon Summit at SCALE, and we were able to compare notes, so to speak, as they prepared to expand by hosting the first UbuCon Europe in Germany this year. Thanks to the community fund, I also had the immense pleasure of attending UbuCon Europe. After I arrived, Sujeevan Vijayakumaran picked me up from the airport and we took the train to Essen, where we walked around the newly-opened Weihnachtsmarkt along with Philip Ballew and Elizabeth Joseph from Ubuntu California. I acted as official menu translator, so there were no missed opportunities for bratwurst, currywurst, glühwein, or beer. Happily fed, we called it a night and got plenty of sleep so that we would last the entire weekend long.

Zeche Zollverein, a UNESCO World Heritage site

UbuCon Europe was a marvelous experience. Friday started things off with social events so everyone could mingle and find shared interests. About 25 people attended the Zeche Zollverein Coal Mine Industrial Complex for a guided tour of the last operating coal extraction and processing site in the Ruhr region and was a fascinating look at the defining industry of the Ruhr region for a century. After that, about 60 people joined in a special dinner at Unperfekthaus, a unique location that is part creative studio, part art gallery, part restaurant, and all experience. With a buffet and large soda fountains and hot coffee/chocolate machine, dinner was another chance to mingle as we took over a dining room and pushed all the tables together in a snaking chain. It was there that some Portuguese attendees first recognized me as the default voice for uNav, which was something I had to get used to over the weekend. There's nothing like a good dinner to get people comfortable together, and the Telegram channel that was established for UbuCon Europe attendees was spread around.

Sujeevan Vijayakumaran addressing the UbuCon Europe attendees

The main event began bright and early on Saturday. Attendees were registered on the fifth floor of Unperfekthaus and received their swag bags full of cool stuff from the event sponsors. After some brief opening statements from Sujeevan, Marcus Gripsgård announced an exciting new Kickstarter campaign that will bring an easier convergence experience to not just most Ubuntu phones, but many Android phones as well. Then, Jane Silber, the CEO of Canonical, gave a keynote that went into detail about where Canonical sees Ubuntu in the future, how convergence and snaps will factor into future plans, and why Canonical wants to see one single Ubuntu on the cloud, server, desktop, laptop, tablet, phone, and Internet of Things. Afterward, she spent some time answering questions from the community, and she impressed me with her willingness to answer questions directly. Later on, she was chatting with a handful of people and it was great to see the consideration and thought she gave to those answers as well. Luckily, she also had a little time to just relax and enjoy herself without the third degree before she had to leave later that day. I was happy to have a couple minutes to chat with her.

Nathan Haines and Jane Silber

Microsoft Deutschland GmbH sent Malte Lantin to talk about Bash on Ubuntu on Windows and how the Windows Subsystem for Linux works, and while jokes about Microsoft and Windows were common all weekend, everyone kept their sense of humor and the community showed the usual respect that’s made Ubuntu so wonderful. While being able to run Ubuntu software natively on Windows makes many nervous, it also excites others. One thing is for sure: it’s convenient, and the prospect of having a robust terminal emulator built right in to Windows seemed to be something everyone could appreciate.

After that, I ate lunch and gave my talk, Advocacy for Advocates, where I gave advice on how to effectively recommend Ubuntu and other Free Software to people who aren’t currently using it or aren’t familiar with the concept. It was well-attended and I got good feedback. I also had a chance to speak in German for a minute, as the ambiguity of the term Free Software in English disappears in German, where freie Software is clear and not confused with kostenlose Software. It’s a talk I’ve given before and will definitely give again in the future.

After the talks were over, there was a raffle and then a UbuCon quiz show where the audience could win prizes. I gave away signed copies of my book, Beginning Ubuntu for Windows and Mac Users, in the raffle, and in fact I won a “xenial xeres” USB drive that looks like an origami squirrel as well as a Microsoft t-shirt. Afterwards there was a dinner that was not only delicious, with apple crumble for dessert, but also came with free beer and wine, which rarely detracts from any meal.

Marcos Costales and Nathan Haines before the uNav presentation

Sunday was also full of great talks. I loved Marcos Costales’s talk on uNav, and as the video shows, I was inspired to jump up as the talk was about to begin and improvise the uNav-like announcement “You have arrived at the presentation.” With the crowd warmed up from the joke, Marcos took us on a fascinating journey of the evolution of uNav and finished with tips and tricks for using it effectively. I also appreciated Olivier Paroz’s talk about Nextcloud and its goals, as I run my own Nextcloud server. I was sure to be at the UbuCon Europe feedback and planning roundtable and was happy to hear that next year UbuCon Europe will be held in Paris. I’ll have to brush up on my restaurant French before then!

Nathan Haines contemplating tools with a Neanderthal

That was the end of UbuCon, but I hadn’t been to Germany in over 13 years so it wasn’t the end of my trip! Sujeevan was kind enough to put up with me for another four days, and he accompanied me on a couple of shopping trips as well as some more sightseeing. The highlight was a trip to the Neanderthal Museum in the aptly-named Neandertal, Germany, and afterward we met his friend (and UbuCon registration desk volunteer!) Philipp Schmidt in Düsseldorf at their Weihnachtsmarkt, where we tried Feuerzangenbowle, in which mulled wine is improved by soaking a block of sugar in rum, then putting it over the wine and lighting the sugarloaf on fire so it drips into the wine. After that, we went to the Brauerei Schumacher where I enjoyed not only Schumacher Alt beer, but also the Rhein-style sauerbraten that has been on my to-do list for a decade and a half. (Other variations of sauerbraten—not to mention beer—remain on the list!)

I’d like to thank Sujeevan for his hospitality on top of the tremendous job that he, the German LoCo, and the French LoCo exerted to make the first UbuCon Europe a stunning success. I’d also like to thank everyone who contributed to the Ubuntu Community Reimbursement Fund for helping out with my travel expenses, and everyone who attended, because of course we put everything together for you to enjoy.

on December 08, 2016 05:04 AM

December 07, 2016

LXD logo

Introduction

The LXD and AppArmor teams have been working to support loading AppArmor policies inside LXD containers for a while. This support, which finally landed in the latest Ubuntu kernels, now makes it possible to install snap packages.

Snap packages are a new way of distributing software, directly from the upstream and with a number of security features wrapped around them so that these packages can’t interfere with each other or cause harm to your system.

Requirements

There are a lot of moving pieces to get all of this working. The initial enablement was done on Ubuntu 16.10 with Ubuntu 16.10 containers, but all the needed bits are now progressively being pushed as updates to Ubuntu 16.04 LTS.

The easiest way to get this to work is with:

  • Ubuntu 16.10 host
  • Stock Ubuntu kernel (4.8.0)
  • Stock LXD (2.4.1 or higher)
  • Ubuntu 16.10 container with “squashfuse” manually installed in it

Installing the nextcloud snap

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it.

lxc launch ubuntu:16.10 nextcloud
lxc exec nextcloud -- apt update
lxc exec nextcloud -- apt dist-upgrade -y
lxc exec nextcloud -- apt install squashfuse -y

And then, let’s install that “nextcloud” snap with:

lxc exec nextcloud -- snap install nextcloud

Finally, grab the container’s IP and access “http://<IP>” with your web browser:

stgraber@castiana:~$ lxc list nextcloud
+-----------+---------+----------------------+----------------------------------------------+
|    NAME   |  STATE  |         IPV4         |                     IPV6                     |
+-----------+---------+----------------------+----------------------------------------------+
| nextcloud | RUNNING | 10.148.195.47 (eth0) | fd42:ee2:5d34:25c6:216:3eff:fe86:4a49 (eth0) |
+-----------+---------+----------------------+----------------------------------------------+

Nextcloud Login screen

Installing the LXD snap in a LXD container

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it.
This time with support for nested containers.

lxc launch ubuntu:16.10 lxd -c security.nesting=true
lxc exec lxd -- apt update
lxc exec lxd -- apt dist-upgrade -y
lxc exec lxd -- apt install squashfuse -y

Now let’s remove the LXD that came pre-installed in the container so we can replace it with the snap.

lxc exec lxd -- apt remove --purge lxd lxd-client -y

Because we already have a stable LXD on the host, we’ll make things a bit more interesting by installing the latest build from git master rather than the latest stable release:

lxc exec lxd -- snap install lxd --edge

The rest is business as usual for an LXD user:

stgraber@castiana:~$ lxc exec lxd bash
root@lxd:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]?
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

root@lxd:~# lxd.lxc launch images:archlinux arch
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

Creating arch
Starting arch

root@lxd:~# lxd.lxc list
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                      IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| arch | RUNNING | 10.106.137.64 (eth0) | fd42:2fcd:964b:eba8:216:3eff:fe8f:49ab (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+

And that’s it, you now have the latest LXD build installed inside a LXD container and running an archlinux container for you. That LXD build will update very frequently as we publish new builds to the edge channel several times a day.

Conclusion

It’s great to have snaps now install properly inside LXD containers. Production users can now setup hundreds of different containers, network them the way they want, setup their storage and resource limits through LXD and then install snap packages inside them to get the latest upstream releases of the software they want to run.

That’s not to say that everything is perfect yet. This is all built on some really recent kernel work, using unprivileged FUSE filesystem mounts and unprivileged AppArmor profile stacking and namespacing. There are very likely still some issues that need to be resolved in order to get most snaps to work identically to when they’re installed directly on the host.

If you notice discrepancies between a snap running directly on the host and a snap running inside an LXD container, you’ll want to look at the “dmesg” output for any DENIED entry, which would indicate AppArmor rejecting a request from the snap.
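For example, a quick way to scan for those entries:

# grep the kernel log for AppArmor denials
dmesg | grep DENIED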

This typically indicates either a bug in AppArmor itself or in the way the AppArmor profiles are generated by snapd. If you find one of those issues, you can report it in #snappy on irc.freenode.net or file a bug at https://launchpad.net/snappy/+filebug so it can be investigated.

Extra information

More information on snap packages can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

on December 07, 2016 02:37 PM

December 06, 2016

Welcome to the Ubuntu Weekly Newsletter. This is issue #490 for the week November 28 – December 4, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Chris Guiver
  • Elizabeth K. Joseph
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on December 06, 2016 04:17 AM

December 05, 2016

Again, if you missed it, IronFunctions is an open-source, Lambda-compatible, on-premises, language-agnostic, serverless compute service.

While AWS Lambda only supports Java, Python and Node.js, IronFunctions allows you to use any language you desire by running your code in containers.

With Microsoft being one of the biggest players in open source and .NET going cross-platform, it was only right to add support for it in the IronFunctions fn tool.

TL;DR:

The following demos a .NET function that takes in a URL for an image and generates an MD5 checksum hash for it:

Using dotnet with functions

Make sure you have downloaded and installed dotnet. Now create an empty dotnet project in the directory of your function:

dotnet new  

By default dotnet creates a Program.cs file with a main method. To make it work with the IronFunctions fn tool, rename it to func.cs.

mv Program.cs func.cs  

Now change the code to do whatever magic you need it to do. In our case the code takes in a URL for an image and generates an MD5 checksum hash for it. The code is the following:

using System;  
using System.Text;  
using System.Security.Cryptography;  
using System.IO;

namespace ConsoleApplication  
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // if nothing is being piped in, then exit
            if (!IsPipedInput())
                return;

            var input = Console.In.ReadToEnd();
            var stream = DownloadRemoteImageFile(input);
            var hash = CreateChecksum(stream);
            Console.WriteLine(hash);
        }

        private static bool IsPipedInput()
        {
            try
            {
                bool isKey = Console.KeyAvailable;
                return false;
            }
            catch
            {
                return true;
            }
        }
        private static byte[] DownloadRemoteImageFile(string uri)
        {

            var request = System.Net.WebRequest.CreateHttp(uri);
            var response = request.GetResponseAsync().Result;
            var stream = response.GetResponseStream();
            using (MemoryStream ms = new MemoryStream())
            {
                stream.CopyTo(ms);
                return ms.ToArray();
            }
        }
        private static string CreateChecksum(byte[] stream)
        {
            using (var md5 = MD5.Create())
            {
                var hash = md5.ComputeHash(stream);
                var sBuilder = new StringBuilder();

                // Loop through each byte of the hashed data
                // and format each one as a hexadecimal string.
                for (int i = 0; i < hash.Length; i++)
                {
                    sBuilder.Append(hash[i].ToString("x2"));
                }

                // Return the hexadecimal string.
                return sBuilder.ToString();
            }
        }
    }
}

Note: I/O with an IronFunctions function is done via stdin and stdout. This code reads the image URL from stdin and writes the resulting checksum to stdout.

Using with IronFunctions

Let's first init our code to make it deployable with IronFunctions:

fn init <username>/<funcname>  

Since IronFunctions relies on Docker to work (we will add rkt support soon), the <username> is required to publish to Docker Hub. The <funcname> is the identifier of the function.

In our case we will use dotnethash as the <funcname>, so the command will look like:

fn init seiflotfy/dotnethash  

Running the command creates the func.yaml file required by IronFunctions. The function can then be built and pushed as shown below.

Push to docker

fn push  

This will create a Docker image and push it to Docker Hub.

Publishing to IronFunctions

To publish to IronFunctions run ...

fn routes create <app_name>  

where <app_name> is (no surprise here) the name of the app, which can encompass many functions.

This creates a full path in the form of http://<host>:<port>/r/<app_name>/<function>

In my case, I will call the app myapp:

fn routes create myapp  

Calling

Now you can

fn call <app_name> <funcname>  

or

curl http://<host>:<port>/r/<app_name>/<function>  

So in my case

echo http://lorempixel.com/1920/1920/ | fn call myapp /dotnethash  

or

curl -X POST -d 'http://lorempixel.com/1920/1920/'  http://localhost:8080/r/myapp/dotnethash  

What now?

You can find the whole code in the examples on GitHub. Feel free to join the Iron.io team on Slack, to write your own examples in any of your favourite programming languages such as Lua or Elixir, and to create a PR :)

on December 05, 2016 10:15 PM

We have survived two testing days, and now we can safely say that it will become a Friday tradition :)

Last Friday our nice guest was Aaron Ogle, from Rocket Chat. He gave us a tour of the Rocket Chat UI and we discussed how they packaged it as a snap.

If you missed it, click the image below to watch it.


Building on what we saw in the first session, we tested the snap using a virtual machine again. But this time we cloned it, to keep a pristine machine and make future testing sessions faster. If you want to help the Ubuntu and Rocket Chat communities, this is an easy way to prepare your environment:

Once you have your clone ready, install the most recent and bleeding edge version of Rocket Chat with:

$ sudo snap install rocketchat-server --edge
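A quick sketch for confirming the install and pulling newer edge builds in later sessions (standard snap commands):

$ snap list | grep rocketchat
$ sudo snap refresh rocketchat-server --edge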

Then you can follow this gist with the initial steps to start testing the Rocket Chat snap.

You can also test a real installation of Rocket Chat by joining our community channel, where we are available all day, every day. If you have a question, just ask. I am elopio in there.

During the session we took a look at the GitHub website, where many free software communities do their development in the open. They have a great guide to start contributing to open source projects. Go on and spread your love for free software in the form of bug reports :)

The gratitude this week goes to our newly acquired staff members Julia and Kyle, and of course to Aaron for letting us have a fun Friday evening. Make sure to take a look at the cool things he and his teammates are doing; and if you have some free time and want to join an exciting, open and nice community, give them a hand. Also try the Jitsi integration for video conferencing; it's mind-blowing that there are no closed components anywhere.

See you next Friday at Ubuntu On-Air.

on December 05, 2016 05:04 PM

In the previous post on Snapping KDE Applications we looked at the high-level implications and use of the KDE Frameworks 5 content snap for snapcrafting snap bundles for binary distribution. Today I want to get a bit more technical and look at the actual building and inner workings of the content snap itself.

The KDE Frameworks 5 snap is a content snap. Content snaps are really just ordinary snaps that define a content interface. Namely, they expose part or all of their file tree for use by another snap but otherwise can be regular snaps and have their own applications etc.

KDE Frameworks 5’s snap is special in terms of size and scope. It contains the whole set of KDE Frameworks 5, combined with Qt 5 and a large chunk of the graphics stack that is not part of the ubuntu-core snap. All in all, just for the Qt 5 and KF5 parts we are talking about close to 100 distinct source tarballs that need building to compose the full frameworks stack. KDE is in the fortunate position of already having builds of all of these available through KDE neon. This allows us to simply repack existing work into the content snap. This is for the most part just as good as doing everything from scratch, but has the advantage of saving both maintenance effort and build resources.

I do love automation, so the content snap is built by some rather stringy proof of concept code that automatically translates the needed sources into a working snapcraft.yaml that repacks the relevant KDE neon debs into the content snap.

Looking at this snapcraft.yaml we’ll find some fancy stuff.

After the regular snap attributes, the actual content interface is defined. It’s fairly straightforward and simply exposes the entire snap tree as kde-frameworks-5-all content. This is then used on the application snap side to find a suitable content snap so it can access the exposed content (i.e. in our case the entire file tree).

slots:
    kde-frameworks-5-slot:
        content: kde-frameworks-5-all
        interface: content
        read:
        - "."

The parts of the snap itself are where the most interesting things happen. To make things easier to read and follow I’ll only show the relevant excerpts.

The content snap consists of the following parts: kf5, kf5-dev, breeze, plasma-integration.

The kf5 part is the meat of the snap. It tells snapcraft to stage the binary runtime packages of KDE Frameworks 5 and Qt 5. This effectively makes snapcraft pack the named debs along with necessary dependencies into our snap.

    kf5:
        plugin: nil
        stage-packages:
          - libkf5coreaddons5
        ...

The kf5-dev part looks almost like the kf5 part but has entirely different functionality. Instead of staging the runtime packages it stages the buildtime packages (i.e. the -dev packages). It additionally has a tricky snap rule which excludes everything from actually ending up in the snap. This is a very cool trick: it effectively means that the buildtime packages will be in the stage, so we can build other parts against them, but none of them end up in the final snap. After all, they would be entirely useless there.

    kf5-dev:
        after:
          - kf5
        plugin: nil
        stage-packages:
          - libkf5coreaddons-dev
        ....
        snap:
          - "-*"

Besides those two, we also build two runtime integration parts entirely from scratch: breeze and plasma-integration. They aren’t actually needed, but they ensure sane functionality in terms of icon theme selection etc. These are ordinary build parts that simply rely on the kf5 and kf5-dev parts to provide the necessary dependencies.

An important question to ask here is how one is meant to build against this now. There is this kf5-dev part, but it does not end up in the final snap, where it would be entirely useless anyway as snaps are not used at buildtime. The answer lies in one of the rigging scripts around this. In the snapcraft.yaml we configured the kf5-dev part to stage packages but then excluded everything from being snapped. However, knowing how snapcraft actually goes about its business, we can “abuse” its inner workings to make use of the part after all. Before the actual snap is created snapcraft “primes” the snap; this effectively means that all installed trees (i.e. the stages) are combined into one tree (i.e. the primed tree), and the exclusion rule of the kf5-dev part is then applied to this tree. Or in other words: the primed tree is the snap before exclusion is applied. That means the primed tree contains everything from all parts, including the development headers and CMake configs. We pack this tree into a development tarball which we then use on the application side to stage a development environment for the KDE Frameworks 5 snap.
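A rough sketch of that packing step, assuming plain snapcraft and tar are used (the actual rigging scripts around the content snap may well differ):

snapcraft prime
tar -C prime -cJf kde-frameworks-5-dev_amd64.tar.xz .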

Specifically on the application-side we use a boilerplate part that employs the same trick of stage-everything but snap-nothing to provide the build dependencies while not having anything end up in the final snap.

  kde-frameworks-5-dev:
    plugin: dump
    snap: [-*]
    source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz

Using the KDE Frameworks 5 content snap, KDE can create application snaps that are a fraction of the size they would be if they contained all dependencies themselves. While this does give up some optimization potential by aggregating requirements in a more central fashion, it quickly starts paying off given we are saving upwards of 70 MiB per snap.

Application snaps can of course still add more stuff on top or even override things if needed.

Finally, as we approach the end of the year, we begin the season of giving. What would suit the holidays better than giving to the entire world by supporting KDE with a small donation?

on December 05, 2016 04:10 PM

Yakkety Yak release parties

Rafael Carreras

The Catalan LoCo Team celebrated a release party for the latest Ubuntu version, in this case 16.10 Yakkety Yak, on November 5th in Ripoll, quite a historical place. As always, we started by explaining what Ubuntu is and how it adapts to new times and devices.

FreeCad 3D design and Games were both present at the party.

A few weeks later, on December 3rd, we held another release party, this time in Barcelona.

We went to Soko, previously a chocolate factory, which nowadays is a kind of makers’ lab, very enthusiastic about free software. First, Josep explained the current developments in Ubuntu and we carried out some installations on laptops.

We ate some pizza and had discussions about free software in public administrations. Apart from the usual users who came to install Ubuntu on their computers, the people running Soko gave us 10 laptops to install Ubuntu on too. We finished up by installing Wine so that some Lego software could run.

That’s some art that is being made at Soko.

I’m publishing this post because we need some documentation on release parties. If you need advice on how to organize a release party, you can contact me or anyone in the Ubuntu community.

 

on December 05, 2016 01:44 PM

December 04, 2016

As the name implies, “service-learning is an educational approach that combines learning objectives with community service in order to provide a pragmatic, progressive learning experience while meeting societal needs” (Wikipedia).  When you add the “community” part to that definition, it becomes “about leadership development as well as traditional information and skill acquisition” (Janet 1999).

How does this apply to Open * communities?

Simple!  Community service learning is an ideal way to get middle school, high school and college students involved in the various communities, to understand the power of Open *, and to stay active after their term of community service learning ends.

This idea came to me just today (as of writing, Nov. 30th) as a thought on what Open * really is.  Not the straightforward definition of it, but the effect Open * creates.  As I stated on the home page of my site, Open * creates a sense of empowerment.  One way is through the actions that create skills and improve those skills.  Which skills are those?  Mozilla Learning made a map and description of these skills on their Web Literacy pages.  They are shown below as well:

Most of these skills, along with the ways to gain them (read, write, participate), can be worked on as part of community service learning.

As stated above, community service learning is really about gaining skills, including leadership skills, while (in the Open * sense) contributing to projects that impact the society of the world.  This is really needed now, as there are many local and global issues that Open * can provide solutions to.

I see this as an outreach program for schools and the various organizations/groups such as Ubuntu, System76, Mozilla, and even Linux Padawan.  Unlike Google Summer of Code (GSoC), no one receives a stipend, but the idea of having a mentor could be taken from GSoC.  No, not could but should, because the student needs someone to guide them; hence Linux Padawan could benefit from this idea.

That said, I will try to work out a sample program that could be used and maybe test it with Linux Padawan.  Maybe I could have this ready by the spring semester.

Random Fact #1: Simon Quigley, through his middle school, is in a way already doing this type of learning.

Random Fact #2: At one point of time, I wanted to translate that Web Literacy map into one that can be applied to Open *, not just one topic.

on December 04, 2016 09:35 PM

Releasing ISV applications on Linux is often hard. The ABI of all the libraries you need changes seemingly weekly. Hence you have the option of bundling the world, or building a thousand releases to cover a thousand distribution versions. As a case in point, when MonoDevelop started bundling a C Git library instead of using a C# git implementation, it gained dependencies on all sorts of fairly weak ABI libraries whose exact ABI mix was not consistent across any given pair of distro releases. This broke our policy of releasing “works on anything” .deb and .rpm packages. As a result, I pretty much gave up on packaging MonoDevelop upstream with version 5.10.

Around the 6.1 release window, I decided to re-evaluate the question. I took a closer look at some of the fancy-pants new distribution methods that get a lot of coverage in the Linux press: Snap, AppImage, and Flatpak.

I started with AppImage. It’s very good and appealing for its specialist areas (no external requirements for end users), but it’s kinda useless at solving some of our big areas (the ABI-vs-bundling problem, updating in general).

Next, I looked at Flatpak (once xdg-app). I liked the concept a whole lot. There’s a simple 3-tier dependency hierarchy: Applications, Runtimes, and Extensions. An application depends on exactly one runtime.  Runtimes are root-level images with no dependencies of their own. Extensions are optional add-ons for applications. Anything not provided in your target runtime, you bundle. And an integrated updates mechanism allows for multiple branches and multiple releases parallel-installed (e.g. alpha & stable, easily switched).

There’s also security-related sandboxing features, but my main concerns on a first examination were with the dependency and distribution questions. That said, some users might be happier running Microsoft software on their Linux desktop if that software is locked up inside a sandbox, so I’ve decided to embrace that functionality rather than seek to avoid it.

I basically stopped looking at this point (sorry Snap!). Flatpak provided me with all the functionality I wanted, with an extremely helpful and responsive upstream. I got to work on trying to package up MonoDevelop.

Flatpak (optionally!) uses a JSON manifest for building stuff. Because Mono is still largely stuck in a Gtk+2 world, I opted for the simplest runtime, org.freedesktop.Runtime, and bundled stuff like Gtk+ into the application itself.

Some gentle patching here & there resulted in this repository. Every time I came up with an exciting new edge case, upstream would suggest a workaround within hours – or failing that, added new features to Flatpak just to support my needs (e.g. allowing /dev/kvm to optionally pass through the sandbox).

The end result is, as of the upcoming 0.8.0 release of Flatpak, from a clean install of the flatpak package to having a working MonoDevelop is a single command: flatpak install --user --from https://download.mono-project.com/repo/monodevelop.flatpakref 

For the current 0.6.x versions of Flatpak, the user also needs to flatpak remote-add --user --from gnome https://sdk.gnome.org/gnome.flatpakrepo first – this step will be automated in 0.8.0. This will download org.freedesktop.Runtime, then com.xamarin.MonoDevelop; export icons ‘n’ stuff into your user environment so you can just click to start.
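Once installed, the IDE can also be started from a terminal using the application ID mentioned above:

flatpak run com.xamarin.MonoDevelop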

There are some lingering experience issues due to the sandbox which are on my radar. “Run on external console” doesn’t work, for example, nor does “open containing folder”. There are people working on that (a missing DBus# feature to allow breaking out of the sandbox). But overall, I’m pretty happy. I won’t be entirely satisfied until I have something approximating feature equivalence to the old .debs.  I don’t think that will ever quite be there, since there’s just no rational way to allow arbitrary /usr stuff into the sandbox, but it should provide a decent basis for a QA-able, supportable Linux MonoDevelop. And we can use this work as a starting point for any further fancy features on Linux.

Gtk# app development in Flatpak MonoDevelop

Editing MonoDevelop in MonoDevelop. *Inception noise*

on December 04, 2016 10:44 AM

December 03, 2016

So much for my monthly blogging! Here’s what I have been up to in the Open Source world over the last 6 months.

Debian

  • Uploaded a new version of the debian-multimedia blends metapackages
  • Uploaded the latest abcmidi
  • Uploaded the latest node-process-nextick-args
  • Prepared version 1.0.2 of libdrumstick for experimental, as a first step for the transition. It was sponsored by James Cowgill.
  • Prepared a new node-inline-source-map package, which was sponsored by Gianfranco Costamagna.
  • Uploaded kmetronome to experimental as part of the libdrumstick transition.
  • Prepared a new node-js-yaml package, which was sponsored by Gianfranco Costamagna.
  • Uploaded version 4.2.4 of Gramps.
  • Prepared a new version of vmpk which I am going to adopt, as part of the libdrumstick transition. I tried splitting the documentation into a separate package, but this proved difficult, and in the end I missed the transition freeze deadline for Debian Stretch.
  • Prepared a backport of Gramps 4.2.4, which was sponsored by IOhannes m zmölnig as Gramps is new for jessie-backports.
  • Began a final push to get kosmtik packaged and into the NEW queue before the impending Debian freeze for Stretch. Unfortunately, many dependencies need updating, which also depend on packages not yet in Debian. Also pushed to finish all the new packages for node-tape, which someone else has decided to take responsibility for.
  • Uploaded node-cross-spawn-async to fix a Release Critical bug.
  • Prepared  a new node-chroma-js package,  but this is unfortunately blocked by several out of date & missing dependencies.
  • Prepared a new node-husl package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-resumer package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-object-inspect package, which was sponsored by Gianfranco Costamagna.
  • Removed node-string-decoder from the archive, as it was broken and turned out not to be needed anymore.
  • Uploaded a fix for node-inline-source-map which was failing tests. This turned out to be due to node-tap being upgraded to version 8.0.0. Jérémy Lal very quickly provided a fix in the form of a Pull Request upstream, so I was able to apply the same patch in Debian.

Ubuntu

  • Prepared a merge of the latest blends package from Debian in order to be able to merge the multimedia-blends package later. This was sponsored by Daniel Holbach.
  • Prepared an application to become an Ubuntu Contributing Developer. Unfortunately, this was later declined. I was completely unprepared for the Developer Membership Board meeting on IRC after my holiday. I had had no time to chase for endorsements from previous sponsors, and the application was not really clear about the fact that I was not actually applying for upload permission yet. No matter, I intend to apply again later once I have more evidence & support on my application page.
  • Added my blog to Planet Ubuntu, and this will hopefully be the first post that appears there.
  • Prepared a merge of the latest debian-multimedia blends meta-package package from Debian. In Ubuntu Studio, we have the multimedia-puredata package seeded so that we get all the latest Puredata packages in one go. This was sponsored by Michael Terry.
  • Prepared a backport of Ardour as part of the Ubuntu Studio plan to do regular backports. This is still waiting for sponsorship if there is anyone reading this that can help with that.
  • Did a tweak to the Ubuntu Studio seeds and prepared an update of the Ubuntu Studio meta-packages. However, Adam Conrad did the work anyway as part of his cross-flavour release work without noticing my bug & request for sponsorship. So I closed the bug.
  • Updated the Ubuntu Studio wiki to expand on the process for updating our seeds and meta-packages. Hopefully, this will help new contributors to get involved in this area in the future.
  • Took part in the testing and release of the Ubuntu Studio Trusty 14.04.5 point release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety Beta 1 release.
  • Prepared a backport of Ansible but before I could chase up what to do about the fact that ansible-fireball was no longer part of the Ansible package, some one else did the backport without noticing my bug. So I closed the bug.
  • Prepared an update of the Ubuntu Studio meta-packages. This was sponsored by Jeremy Bicha.
  • Prepared an update to the ubuntustudio-default-settings package. This switched the Ubuntu Studio desktop theme to Numix-Blue, and reverted some commits to drop the ubuntustudio-lightdm-theme package from the archive. This had caused quite a bit of controversy and discussion on IRC due to the transition being a little too close to the release date for Yakkety. This was sponsored by Iain Lane (Laney).
  • Prepared the Numix Blue update for the ubuntustudio-lightdm-theme package. This was also sponsored by Iain Lane (Laney). I should thank Krytarik for the initial Numix Blue theme work (on the lightdm theme & default settings packages).
  • Provided a patch for gfxboot-theme-ubuntu which has a bug which is regularly reported during ISO testing, because the “Try Ubuntu Studio without installing” option was not a translatable string and always appeared in English. Colin Watson merged this, so hopefully it will be translated by the time of the next release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety 16.10 release.
  • After a hint from Jeremy Bicha, I prepared a patch that adds a desktop file for Imagemagick to the ubuntustudio-default-settings package. This will give us a working menu item in Ubuntu Studio whilst we wait for the bug to be fixed upstream in Debian. Next month I plan to finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition, including dropping ubuntustudio-lightdm-theme from the Ubuntu Studio seeds. I will include this fix at the same time.

Other

  • At other times when I have had a spare moment, I have been working on resurrecting my old Family History website. It was originally produced in my Windows XP days, and I was no longer able to edit it in Linux. I decided to convert it to Jekyll. First I had to extract the old HTML from where the website is hosted using the HTTrack Website Copier. Now, I am in the process of switching the structure to the standard Jekyll template approach. I will need to switch to a nice Jekyll-based theme, as the old theming was pretty complex. I pushed the code to my GitHub repository for safe keeping.

Plan for December

Debian

Before the 5th January 2017 Debian Stretch soft freeze I hope to:

Ubuntu

  • Add the Ubuntu Studio Manual Testsuite to the package tracker, and try to encourage some testing of the newest versions of our priority packages.
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages.
  • Reapply to become a Contributing Developer.
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in.

Other

  • Continue working to convert my Family History website to Jekyll.
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project.

on December 03, 2016 11:52 AM

December 02, 2016

Hi!

I’ve uploaded Mesa 12.0.4 for xenial and yakkety to my testing PPA for you to try out. 16.04 shipped with 11.2.0, so it’s a slightly bigger update there, while yakkety is already on 12.0.3; the new version should give Radeon users a 15% performance boost in certain games with complex shaders.

Please give it a spin and report to the (yakkety) SRU bug whether it works or not, and mention the GPU you tested with. At least Intel Skylake still seems to work fine here.
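A small sketch for gathering the details asked for above, assuming the mesa-utils and pciutils packages are installed:

# report the Mesa/OpenGL version in use and the GPU it was tested on
glxinfo | grep -i "opengl version"
lspci | grep -i vga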

 


on December 02, 2016 10:28 PM
At UbuCon Europe I was able to see first-hand the progress of Ubuntu Touch on the Fairphone 2.

Ubuntu Touch & Fairphone 2
The Fairphone 2 is a unique phone. As its name suggests, it is a phone that is ethical with the world: it uses no child labour, it is built with conflict-free minerals, and it even cares about the waste it generates.

Front and back
On the software side it runs several operating systems and, at last, Ubuntu is one of them.

Your choice
The Ubuntu port is implemented by the UBports project, which is advancing by leaps and bounds every week.

When I tried the phone, I was surprised by the speed of Unity, similar to that of my BQ E4.5.
The camera is good enough, and the battery life is acceptable.
I especially loved the quality of the screen; you can tell how sharp it is just by looking at it.
As for applications, I tried several from the Store without any problem.

Case
In short, a great operating system for a great phone :) A win-win.

If you are interested in collaborating as a developer on this port, I recommend this Telegram group: https://telegram.me/joinchat/AI_ukwlaB6KCsteHcXD0jw

All images are CC BY-SA 2.0.
on December 02, 2016 05:54 PM

This is largely based on a presentation I gave a couple of weeks ago. If you are too lazy to read, go watch it instead😉

For 20 years KDE has been building free software for the world. As part of this endeavor, we created a collection of libraries to assist in high-quality C++ software development as well as building highly integrated graphic applications on any operating system. We call them the KDE Frameworks.

With the recent advance of software bundling systems such as Snapcraft and Flatpak, KDE software maintainers are, however, a bit on the spot. As our software builds on such a vast collection of frameworks and supporting technology, the individual size of a distributable application can be quite substantial.

When we tried to package our calculator KCalc as a snap bundle, we found that even a relatively simple application like this makes for a good 70 MiB snap in a working state (most of this is the graphical stack required by our underlying C++ framework, Qt).
Since then a lot of effort has been put into devising a system that lets us deal with this more efficiently. We now have a reasonably suitable solution on the table.

The KDE Frameworks 5 content snap.

A content snap is a special bundle meant to be mounted into other bundles for the purpose of sharing its content. This allows us to share a common core of libraries and other content across all applications, making the individual applications just as big as they need to be. KCalc is only 312 KiB without translations.

The best thing is that, besides some boilerplate definitions, the snapcraft.yaml file defining how to snap the application is just like a regular snapcraft file.

Let’s look at how this works by example of KAlgebra, a calculator and mathematical function plotter:

Any snapcraft.yaml has some global attributes we’ll want to set for the snap

name: kalgebra
version: 16.08.2
summary: ((TBD))
description: ((TBD))
confinement: strict
grade: devel

We’ll want to define an application as well. This essentially allows snapd to expose and invoke our application properly. For the purpose of content sharing we will use a special start wrapper called kf5-launch that allows us to use the content shared Qt and KDE Frameworks. Except for the actual application/binary name this is fairly boilerplate stuff you can use for pretty much all KDE applications.

apps:
  kalgebra:
    command: kf5-launch kalgebra
    plugs:
      - kde-frameworks-5-plug # content share itself
      - home # give us a dir in the user home
      - x11 # we run with xcb Qt platform for now
      - opengl # Qt/QML uses opengl
      - network # gethotnewstuff needs network IO
      - network-bind # gethotnewstuff needs network IO
      - unity7 # notifications
      - pulseaudio # sound notifications

To use the KDE Frameworks 5 content share, we’ll then want to define a plug that our application can use to access the content. This is always the same for all applications.

plugs:
  kde-frameworks-5-plug:
    interface: content
    content: kde-frameworks-5-all
    default-provider: kde-frameworks-5
    target: kf5

Once we have all that out of the way, we can move on to actually defining the parts that make up our snap. Parts are, for the most part, build instructions for the application and its dependencies. With content shares there are two boilerplate parts you want to define.

The development tarball is essentially a fully built KDE Frameworks tree including development headers and CMake configs. The tarball is packed by the same technology that builds the actual content share, so it allows you to build against the correct versions of the latest share.

  kde-frameworks-5-dev:
    plugin: dump
    snap: [-*]
    source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz

The environment rigging provides the kf5-launch script we previously saw in the application’s definition; we’ll use it to execute the application within a suitable environment. It also gives us the directory for the content share mount point.

  kde-frameworks-5-env:
    plugin: dump
    snap: [kf5-launch, kf5]
    source: http://github.com/apachelogger/kf5-snap-env.git

Lastly, we’ll need the actual application part, which simply declares that it needs the dev part staged first and then builds the tarball with boilerplate CMake config flags.

  kalgebra:
    after: [kde-frameworks-5-dev]
    plugin: cmake
    source: http://download.kde.org/stable/applications/16.08.2/src/kalgebra-16.08.2.tar.xz
    configflags:
      - "-DKDE_INSTALL_USE_QT_SYS_PATHS=ON"
      - "-DCMAKE_INSTALL_PREFIX=/usr"
      - "-DCMAKE_BUILD_TYPE=Release"
      - "-DENABLE_TESTING=OFF"
      - "-DBUILD_TESTING=OFF"
      - "-DKDE_SKIP_TEST_SETTINGS=ON"

Putting it all together we get a fairly standard snapcraft.yaml with some additional boilerplate definitions to wire it up with the content share. Please note that the content share uses KDE neon’s Qt and KDE Frameworks builds. So, if you want to try this and need additional build-packages or stage-packages to build a part, make sure that KDE neon’s User Edition archive is present in the build environment’s sources.list: deb http://archive.neon.kde.org/user xenial main. This is going to get a more accessible, centralized solution for all of KDE soon™.
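Before looking at the full file below, here is a rough sketch of how one could add that archive to a build environment (importing the archive’s signing key is left out here):

echo 'deb http://archive.neon.kde.org/user xenial main' | sudo tee /etc/apt/sources.list.d/neon-user.list
sudo apt update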

name: kalgebra
version: 16.08.2
summary: ((TBD))
description: ((TBD))
confinement: strict
grade: devel

apps:
  kalgebra:
    command: kf5-launch kalgebra
    plugs:
      - kde-frameworks-5-plug # content share itself
      - home # give us a dir in the user home
      - x11 # we run with xcb Qt platform for now
      - opengl # Qt/QML uses opengl
      - network # gethotnewstuff needs network IO
      - network-bind # gethotnewstuff needs network IO
      - unity7 # notifications
      - pulseaudio # sound notifications

plugs:
  kde-frameworks-5-plug:
    interface: content
    content: kde-frameworks-5-all
    default-provider: kde-frameworks-5
    target: kf5

parts:
  kde-frameworks-5-dev:
    plugin: dump
    snap: [-*]
    source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz
  kde-frameworks-5-env:
    plugin: dump
    snap: [kf5-launch, kf5]
    source: http://github.com/apachelogger/kf5-snap-env.git
  kalgebra:
    after: [kde-frameworks-5-dev]
    plugin: cmake
    source: http://download.kde.org/stable/applications/16.08.2/src/kalgebra-16.08.2.tar.xz
    configflags:
      - "-DKDE_INSTALL_USE_QT_SYS_PATHS=ON"
      - "-DCMAKE_INSTALL_PREFIX=/usr"
      - "-DCMAKE_BUILD_TYPE=Release"
      - "-DENABLE_TESTING=OFF"
      - "-DBUILD_TESTING=OFF"
      - "-DKDE_SKIP_TEST_SETTINGS=ON"

Now to install this we’ll need the content snap itself. Here is the content snap. To install it, a command like sudo snap install --force-dangerous kde-frameworks-5_*_amd64.snap should get you going. Once that is done, one can install the kalgebra snap. If you are a KDE developer and want to publish your snap on the store, get in touch with me so we can get you set up.

The kde-frameworks-5 content snap is also available in the edge channel of the Ubuntu store. You can try the games kblocks and ktuberling like so:

sudo snap install --edge kde-frameworks-5
sudo snap install --edge --devmode kblocks
sudo snap install --edge --devmode ktuberling
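Once installed, they should launch like any other snap; assuming the command name matches the snap name, something like this will do:

snap run ktuberling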

If you want to be part of making the world a better place, or would like a KDE-themed postcard, please consider donating a penny or two to KDE.

postcard04

on December 02, 2016 02:44 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

In the 11 hours of (paid) work I had to do, I managed to release DLA-716-1 aka tiff 4.0.2-6+deb7u8 fixing CVE-2016-9273, CVE-2016-9297 and CVE-2016-9532. It looks like this package is currently getting new CVEs every month.

Then I spent quite some time to review all the entries in dla-needed.txt. I wanted to get rid of some misleading/no longer applicable comments and at the same time help Olaf who was doing LTS frontdesk work for the first time. I ended up tagging quite a few issues as no-dsa (meaning that we will do nothing for them as they are not serious enough) such as those affecting dwarfutils, dokuwiki, irssi. I dropped libass since the open CVE is disputed and was triaged as unimportant. While doing this, I fixed a bug in the bin/review-update-needed script that we use to identify entries that have not made any progress lately.

Then I claimed libgc and released DLA-721-1 aka libgc 1:7.1-9.1+deb7u1 fixing CVE-2016-9427. The patch was large and had to be manually backported as it was not applying cleanly.

The last thing I did was to test a new imagemagick and review the update prepared by Roberto.

pkg-security work

The pkg-security team is continuing its good work: I sponsored patator to get rid of a useless dependency on pycryptopp which was going to be removed from testing due to #841581. After looking at that bug, it turns out the bug was fixed in libcrypto++ 5.6.4-3 and I thus closed it.

I sponsored many uploads: polenum, acccheck, sucrack (minor updates), bbqsql (new package imported from Kali). A bit later I fixed some issues in the bbqsql package that had been rejected from NEW.

I managed a few RC bugs related to the openssl 1.1 transition: I adopted sslsniff in the team and fixed #828557 by build-depending on libssl1.0-dev after having opened the proper upstream ticket. I did the same for ncrack and #844303 (upstream ticket here). Someone else took care of samdump2 but I still adopted the package in the pkg-security team as it is a security relevant package. I also made an NMU for axel and #829452 (it’s not pkg-security related but we still use it in Kali).

Misc Debian work

Django. I participated in the discussion about a change letting Django count the number of developers that use it. Such a change has privacy implications and the discussion sparked quite some interest both in Debian mailing lists and up to LWN.

On a more technical level, I uploaded version 1.8.16-1~bpo8+1 to jessie-backports (security release) and I fixed RC bug #844139 by backporting two upstream commits. This led to the 1.10.3-2 upload. I ensured that this was fixed in the 1.10.x upstream branch too.

dpkg and merged /usr. While reading debian-devel, I discovered dpkg bug #843073 that was threatening the merged-/usr feature. Since the bug was in code that I wrote a few years ago, and since Guillem was not interested in fixing it, I spent an hour to craft a relatively clean patch that Guillem could apply. Unfortunately, Guillem did not yet manage to pull out a new dpkg release with the patches applied. Hopefully it won’t be too long until this happens.

Debian Live. I closed #844332 which was a request to remove live-build from Debian. While it was marked as orphaned, I was always keeping an eye on it and have been pushing small fixes to git. This time I decided to officially adopt the package within the debian-live team and work a bit more on it. I reviewed all pending patches in the BTS and pushed many changes to git. I still have some pending changes to finish to prettify the Grub menu but I plan to upload a new version really soon now.

Misc bugs filed. I filed two upstream tickets on uwsgi to help fix currently open RC bugs on the package. I filed #844583 on sbuild to support arbitrary version suffix for binary rebuild (binNMU). And I filed #845741 on xserver-xorg-video-qxl to get it fixed for the xorg 1.19 transition.

Zim. While trying to fix #834405 and update the required dependencies, I discovered that I had to update pygtkspellcheck first. Unfortunately, its package maintainer was MIA (missing in action) so I adopted it first as part of the python-modules team.

Distro Tracker. I fixed a small bug that resulted in an ugly traceback when we got queries with a non-ASCII HTTP_REFERER.

Thanks

See you next month for a new summary of my activities.


on December 02, 2016 11:45 AM

December 01, 2016

It’s Season Nine Episode Forty of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Dan Kermac are connected and speaking to your brain.

The same line-up as last week is here again for another episode.

In this week’s show:

  • We discuss what we’ve been up to recently:
  • We review the nexdock and how it works with the Raspberry Pi 3, Meizu Pro 5 Ubuntu Phone, bq M10 FHD Ubuntu Tablet, Android, Dragonboard 410c, Roku, Chromecast, Amazon FireTV and laptops from Dell and Entroware.

  • We share a Command Line Lurve:

sudo apt install netdiscover
sudo netdiscover

The output looks something like this:

_____________________________________________________________________________
  IP            At MAC Address     Count     Len  MAC Vendor / Hostname
-----------------------------------------------------------------------------
192.168.2.2     fe:ed:de:ad:be:ef      1      42  Unknown vendor
192.168.2.1     da:d5:ba:be:fe:ed      1      60  TP-LINK TECHNOLOGIES CO.,LTD.
192.168.2.11    ba:da:55:c0:ff:ee      1      60  BROTHER INDUSTRIES, LTD.
192.168.2.30    02:02:de:ad:be:ef      1      60  Elitegroup Computer Systems Co., Ltd.
192.168.2.31    de:fa:ce:dc:af:e5      1      60  GIGA-BYTE TECHNOLOGY CO.,LTD.
192.168.2.107   da:be:ef:15:de:af      1      42  Unknown vendor
192.168.2.109   b1:gb:00:bd:ba:be      1      60  Denon, Ltd.
192.168.2.127   da:be:ef:15:de:ad      1      60  ASUSTek COMPUTER INC.
192.168.2.128   ba:df:ee:d5:4f:cc      1      60  ASUSTek COMPUTER INC.
192.168.2.101   ba:be:4d:ec:ad:e5      1      42  Roku, Inc
192.168.2.106   ba:da:55:0f:f1:ce      1      42  LG Electronics
192.168.2.247   f3:3d:de:ad:be:ef      1      60  Roku, Inc
192.168.3.2     ba:da:55:c0:ff:33      1      60  Raspberry Pi Foundation
192.168.3.1     da:d5:ba:be:f3:3d      1      60  TP-LINK TECHNOLOGIES CO.,LTD.
192.168.2.103   da:be:ef:15:d3:ad      1      60  Unknown vendor
192.168.2.104   b1:gb:00:bd:ba:b3      1      42  Unknown vendor
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Flickr.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on December 01, 2016 03:00 PM

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately includes spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks.

On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.
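For the curious, the Shorewall syntax mentioned above really is that terse; a generic two-zone sketch (not taken from my actual configuration) spread over /etc/shorewall/zones and /etc/shorewall/policy looks roughly like this:

# /etc/shorewall/zones
#ZONE   TYPE
fw      firewall
net     ipv4
loc     ipv4

# /etc/shorewall/policy
#SOURCE DEST    POLICY  LOG
loc     net     ACCEPT
net     all     DROP    info
all     all     REJECT  info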

Which boxes to start with?

There are various considerations when going down this path:

  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example, when upgrading a broadband connection to Gigabit, or when an older NAS gets a noisy fan or struggles with SSD performance; in these cases, the decision about what to buy can be limited to those devices that are optimal for replacing the OS.
  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
  • Is a completely silent/fanless solution necessary?
  • Is it possible to completely avoid embedded microcode and firmware?
  • How many other free software developers are using the same box, or will you be first?

Discussing these options

I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions have already appeared; it would be great to see any other ideas people have about these choices.

on December 01, 2016 01:11 PM

November’s reading list

Canonical Design Team

Here are the best links shared by the design team during the month of November:

  1. BARBICAN | Urban Poetry
  2. The Future of Web Education
  3. A new algorithm for finding a visual center of a polygon
  4. This Map of the World Just Won Japan’s Prestigious Design Award
  5. Ten things I wish I knew as a UX Research team of one
  6. Designing a Usable Dashboard
  7. DesignOps at Airbnb: How we manage effective design at scale
  8. The Coming Revolution in Email Design

Thank you to Jamie, Karl, Matthew and me for the links this month!

on December 01, 2016 10:07 AM

Ubuntu Core Gadget Snaps

Zygmunt Krynicki

Gadget snaps, the somewhat mysterious part of snappy that few people grok. Being a distinct snap type, next to kernel, os and the most common type, app, it gets some special roles. If you are on a classic system like Ubuntu, Debian or Fedora you don't really need or have one yet. Looking at all-snap core devices you will always see one. In fact, each snappy reference platform has one. But where are they?

Up until now the gadget snaps were a bit hard to find. They were out there but you had to have a good amount of luck and twist your tongue at the right angle to find them. That's all changed now. If you look at https://github.com/snapcore you will see a nice, familiar pattern of devicename-gadget. Each repository is dedicated to one device so you will see a gadget snap for Raspberry Pi 2 or Pi 3, for example.

But there's more! Each of those github repositories is linked to a launchpad project that automatically mirrors the git repository, builds the snap and uploads it to the store and publishes the snap to the edge channel!
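If you want to poke at one of them locally, a minimal workflow (assuming the Raspberry Pi 3 repository follows the devicename-gadget pattern above and ships a snapcraft.yaml) could look like this:

git clone https://github.com/snapcore/pi3-gadget
cd pi3-gadget
snapcraft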

The work isn't over; as you will see, the gadget snaps are mostly in binary form, hand-made to work but still a bit too mysterious. The Canonical Foundations team is working on building them in a way that is friendlier to the community and easier to trace back to their source code origins.

If you'd like to learn more about this topic then have a look at the snapd wiki page for gadget snaps.
on December 01, 2016 08:29 AM

November 30, 2016

Today, Amazon announced a new web service named Amazon Polly, which converts text to speech in a number of languages and voices.

Polly is trivial to use for basic text to speech, even from the command line. Polly also has features that allow for more advanced control of the resulting speech including the use of SSML (Speech Synthesis Markup Language). SSML is familiar to folks already developing Alexa Skills for the Amazon Echo family.
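As a quick illustration of the SSML route (a simple sketch, not something the rest of this article depends on), you can switch --text-type to ssml and wrap the text in <speak> tags, for instance to insert a pause:

aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Salli" \
  --text-type "ssml" \
  --text '<speak>Hello.<break time="500ms"/>This speech is paced with SSML.</speak>' \
  ssml-speech.mp3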

This article describes some simple fooling around I did with this new service.

Deliver Amazon Polly Speech By Phone Call With Twilio

I’ve been meaning to develop some voice applications with Twilio, so I took this opportunity to test Twilio phone calls with speech generated by Amazon Polly. The result sounds a lot better than the default Twilio-generated speech.

The basic approach is:

  1. Generate the speech audio using Amazon Polly.

  2. Upload the resulting audio file to S3.

  3. Trigger a phone call with Twilio, pointing it at the audio file to play once the call is connected.

Here are some sample commands to accomplish this:

1- Generate Speech Audio With Amazon Polly

Here’s a simple example of how to turn text into speech, using the latest aws-cli:

text="Hello. This speech is generated using Amazon Polly. Enjoy!"
audio_file=speech.mp3

aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Salli" \
  --text "$text" \
  $audio_file

You can listen to the resulting output file using your favorite audio player:

mpg123 -q $audio_file

2- Upload Audio to S3

Create or re-use an S3 bucket to store the audio files temporarily.

s3bucket=YOURBUCKETNAME
aws s3 mb s3://$s3bucket

Upload the generated speech audio file to the S3 bucket. I use a long, random key for a touch of security:

s3key=audio-for-twilio/$(uuid -v4 -FSIV).mp3
aws s3 cp --acl public-read $audio_file s3://$s3bucket/$s3key

For easy cleanup, you can use a bucket with a lifecycle that automatically deletes objects after a day or thirty. See instructions below for how to set this up.

3- Initiate Call With Twilio

Once you have set up an account with Twilio (see pointers below if you don’t have one yet), here are sample commands to initiate a phone call and play the Amazon Polly speech audio:

from_phone="+1..." # Your Twilio allocated phone number
to_phone="+1..."   # Your phone number to call

TWILIO_ACCOUNT_SID="..." # Your Twilio account SID
TWILIO_AUTH_TOKEN="..."  # Your Twilio auth token

speech_url="http://s3.amazonaws.com/$s3bucket/$s3key"
twimlet_url="http://twimlets.com/message?Message%5B0%5D=$speech_url"

curl -XPOST https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Calls.json \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
  --data-urlencode "From=$from_phone" \
  --data-urlencode "To=to_phone" \
  --data-urlencode "Url=$twimlet_url"

The Twilio web service will return immediately after queuing the phone call. It could take a few seconds for the call to be initiated.

Make sure you listen to the phone call as soon as you answer, as Twilio starts playing the audio immediately.

The ringspeak Command

For your convenience (actually for mine), I’ve put together a command line program that turns all the above into a single command. For example, I can now type things like:

... || ringspeak --to +1NUMBER "Please review the cron job failure messages"

or:

ringspeak --at 6:30am \
  "Good morning!" \
  "Breakfast is being served now in Venetian Hall G.." \
  "Werners keynote is at 8:30."

Twilio credentials, default phone numbers, S3 bucket configuration, and Amazon Polly voice defaults can be stored in a $HOME/.ringspeak file.

Here is the source for the ringspeak command:

https://github.com/alestic/ringspeak

Tip: S3 Setup

Here is a sample command to configure an S3 bucket with automatic deletion of all keys after 1 day:

aws s3api put-bucket-lifecycle \
  --bucket "$s3bucket" \
  --lifecycle-configuration '{
    "Rules": [{
        "Status": "Enabled",
        "ID": "Delete all objects after 1 day",
        "Prefix": "",
        "Expiration": {
          "Days": 1
        }
  }]}'

This is convenient because you don’t have to worry about knowing when Twilio completes the phone call to clean up the temporary speech audio files.

Tip: Twilio Setup

This isn’t the place for an entire Twilio howto, but I will say that it is about this simple to set up:

  1. Create a Twilio account

  2. Reserve a phone number through Twilio.

  3. Find the ACCOUNT SID and AUTH TOKEN for use in Twilio API calls.

When you are using the Twilio free trial, it requires you to verify phone numbers before calling them. To call arbitrary numbers, enter your credit card and fund the minimum of $20.

Twilio will only charge you for what you use (about a dollar a month per phone number, about a penny per minute for phone calls, etc.).

Closing

A lot is possible when you start integrating Twilio with AWS. For example, my daughter developed an Alexa skill that lets her speak a message for a family member and have it delivered by phone. Alexa triggers her AWS Lambda function, which invokes the Twilio API to deliver the message by voice call.

With Amazon Polly, these types of voice applications can sound better than ever.

Original article and comments: https://alestic.com/2016/11/amazon-polly-text-to-speech/

on November 30, 2016 06:30 PM

Ohio LinuxFest 2016

Elizabeth K. Joseph

Last month I had the pleasure of finally attending an Ohio LinuxFest. The conference has been on my radar for years, but every year I seemed to have some kind of conflict. When my Tour of OpenStack Deployment Scenarios was accepted I was thrilled to finally be able to attend. My employer at the time also pitched in as a Bronze sponsor and sent along a banner that showcased my talk, and my OpenStack book!

The event kicked off on Friday and the first talk I attended was by Jeff Gehlbach on What’s Happening with OpenNMS. I’ve been to several OpenNMS talks over the years and played with it some, so I knew the background of the project. This talk covered several of the latest improvements. Of particular note were some of their UI improvements, including both a website refresh and some stunning improvements to the WebUI. It was also interesting to learn about Newts, the time-series data store they’ve been developing to replace RRDtool, which they struggled to scale with their tooling. Newts is decoupled from the visualization tooling so you can hook in your own, like if you wanted to use Grafana instead.

I then went to Rob Kinyon’s Devs are from Mars, Ops are from Venus. He had some great points about communication between ops, dev and QA, starting with being aware and understanding of the fact that you all have different goals, which sometimes conflict. Pausing to make sure you know why different teams behave the way they do and knowing that they aren’t just doing it to make your life difficult, or because they’re incompetent, makes all the difference. He implored the audience to assume that we’re all smart, hard-working people trying to get our jobs done. He also touched upon improvements to communication, making sure you repeat requests in your own words so misunderstandings don’t occur due to differing vocabularies. Finally, he suggested that some cross-training happen between roles. A developer may never be able to take over full time for an operator, or vice versa, but walking a mile in someone else’s shoes helps build the awareness and understanding that he stresses is important.

The afternoon keynote was given by Catherine Devlin on Hacking Bureaucracy with 18F. She works for the government in the 18F digital services agency. Their mandate is to work with other federal agencies to improve their digital content, from websites to data delivery. Modeled after a startup, she explained that they try not to over-plan the way many government organizations do (which can lead to failure); instead they want to fail fast and keep iterating. She also said their team has a focus on hiring good people and understanding the needs of the people they serve, rather than focusing on raw technical talent and the tools. Their practices center around an open by default philosophy (see: 18F: Open source policy), so much of their work is open source and can be adopted by other agencies. They also make sure they understand the culture of organizations they work with so that the tools they develop together will actually be used, as well as respecting the domain knowledge of teams they’re working with. Slides from her talk are here, and they include lots of great links to agency tooling they’ve worked on: https://github.com/catherinedevlin/olf-2016-keynote


Catherine Devlin on 18F

That evening folks gathered in the expo hall to meet and eat! That’s where I caught up with my friends from Computer Reach. This is the non-profit I went to Ghana with back in 2012 to deploy Ubuntu-based desktops. I spent a couple weeks there with Dave, Beth Lynn and Nancy (alas, unable to come to OLF) so it was great to see them again. I learned more about the work they’re continuing to do, having switched to using mostly Xubuntu on new installs which was written about here. On a personal level it was a lot of fun connecting with them too, we really bonded during our adventures over there.


Tyler Lamb, Dave Sevick, Elizabeth K. Joseph, Beth Lynn Eicher

Saturday morning began with a keynote from Ethan Galstad on Becoming the Next Tech Entrepreneur. Ethan is the founder of Nagios, and in his talk he traced some of the history of his work on getting Nagios off the ground as a proper project and company, and shared his belief in why technologists make good founders. In his work he drew from his industry and market expertise from being a technologist and was able to play to the niche he was focused on. He also suggested that folks look to what other founders have done that has been successful, and recommended some books (notably Founders at Work and Work the System). Finally, he walked through some of what can be done to get started, including the stages of idea development, basic business plan (don’t go crazy), a rough 1.0 release that you can have some early customers test and get feedback from, and then into marketing, documenting and focused product development. He concluded by stressing that open source project leaders are already entrepreneurs and the free users of your software are your initial market.

Next up was Robert Foreman’s Mixed Metaphors: Using Hiera with Foreman where he sketched out the work they’ve done that preserves usage of Hiera’s key-value store system but leverages Foreman for the actual orchestration. The mixing of provisioning and orchestration technologies is becoming more common, but I hadn’t seen this particular mashup.

My talk was A Tour of OpenStack Deployment Scenarios. This is the same talk I gave at FOSSCON back in August, walking the audience through a series of ways that OpenStack could be configured to provide compute instances, metering and two types of storage. For each I gave a live demo using DevStack. I also talked about several other popular components that could be added to a deployment. Slides from my talk are here (PDF), which also link to a text document with instructions for how to run the DevStack demos yourself.


Thanks to Vitaliy Matiyash for taking a picture during my talk! (source)

At lunch I met up with my Ubuntu friends to catch up. We later met at the booth where they had a few Ubuntu phones and tablets that gained a bunch of attention throughout the event. This event was also my first opportunity to meet Unit193 and Svetlana Belkin in person, both of whom I’ve worked with on Ubuntu for years.


Unit193, Svetlana Belkin, José Antonio Rey, Elizabeth K. Joseph and Nathan Handler

After lunch I went over to see David Griggs of Dell give us “A Look Under the Hood of Ohio Supercomputer Center’s Newest Linux Cluster.” Supercomputers are cool and it was interesting to learn about the system it was replacing, the planning that went into the replacement and workload cut-over and see in-progress photos of the installation. From there I saw Ryan Saunders speak on Automating Monitoring with Puppet and Shinken. I wasn’t super familiar with the Shinken monitoring framework, so this talk was an interesting and very applicable demonstration of the benefits.

The last talk I went to before the closing keynotes was from my Computer Reach friends Dave Sevick and Tyler Lamb. They presented their “Island Server” imaging server that’s now being used to image all of the machines that they re-purpose and deploy around the world. With this new imaging server they’re able to image both Mac and Linux PCs from one Macbook Pro rather than having a different imaging server for each. They were also able to do a live demo of a Mac and Linux PC being imaged from the same Island Server at once.


Tyler and Dave with the Island Server in action

The event concluded with a closing keynote by a father and daughter duo, Joe and Lily Born, on The Democratization of Invention. Joe Born first found fame in the 90s when he invented the SkipDoctor CD repair device, and is now the CEO of Aiwa which produces highly rated Bluetooth speakers. Lily Born invented the tip-proof Kangaroo Cup. The pair reflected on their work and how the idea to product in the hands of customers has changed in the past twenty years. While the path to selling SkipDoctor had a very high barrier to entry, globalization, crowd-funding, 3D printers and internet-driven word of mouth and greater access to the press all played a part in the success of Lily’s Kangaroo cup and the new Aiwa Bluetooth speakers. While I have no plans to invent anything any time soon (so much else to do!) it was inspiring to hear how the barriers have been lowered and inventors today have a lot more options. Also, I just bought an Aiwa Exos-9 Bluetooth Speaker, it’s pretty sweet.

My conference adventures concluded with a dinner with my friends José, Nathan and David, all three of whom I also spent time with at FOSSCON in Philadelphia the month before. It was fun getting together again, and we wandered around downtown Columbus until we found a nice little pizzeria. Good times.

More photos from the Ohio LinuxFest here: https://www.flickr.com/photos/pleia2/albums/72157674988712556

on November 30, 2016 06:29 PM

November 29, 2016

As I just posted in the Mission Forum, our KDE Developer Guide needs a new home. Currently it is "not found" where it is supposed to be.

UPDATE: Nicolas found the PDF on archive.org, which does have the photos too. Not as good as the xml, but better than nothing.

We had great luck using markdown files in git for the chapters of the Frameworks Cookbook, so the Devel Guide should be stored and developed in a like manner. I've been reading about Sphinx lately as a way to write documentation, which is another possibility. Kubuntu uses Sphinx for docs.

In any case, I do not have the time or skills to get, restructure and re-place this handy guide for our GSoC students and other new KDE contributors.

This is perhaps suitable for a Google Code-in task, but I would need a mentor who knows markdown or Sphinx to oversee. Contact me if interested! #kde-books or #kde-soc
on November 29, 2016 06:31 AM

A little while back I kicked off a competition to give away a Luma Wifi Set.

The challenge? Share a great community that you feel does wonderful work. The most interesting one, according to yours truly, gets the prize.

Well, I am delighted to share that Garrett Nay bags the prize for sharing the following in his comment:

I don’t know if this counts, since I don’t live in Seattle and can’t be a part of this community, but I’m in a new group in Salt Lake City that’s modeled after it. The group is Story Games Seattle: http://www.meetup.com/Story-Games-Seattle/. They get together on a weekly+ basis to play story games, which are like role-playing games but have a stronger emphasis on giving everyone at the table the power to shape the story (this short video gives a good introduction to what story games are all about, featuring members of the group:

Story Games from Candace Fields on Vimeo.

Story games seem to scratch a creative itch that I have, but it’s usually tough to find friends who are willing to play them, so a group dedicated to them is amazing to me.

Getting started in RPGs and story games is intimidating, but this group is very welcoming to newcomers. The front page says that no experience with role-playing is required, and they insist in their FAQ that you’ll be surprised at what you’ll be able to create with these games even if you’ve never done it before. We’ve tried to take this same approach with our local group.

In addition to playing published games, they also regularly playtest games being developed by members of the group or others. As far as productivity goes, some of the best known story games have come from members of this and sister groups. A few examples I’m aware of are Microscope, Kingdom, Follow, Downfall, and Eden. I’ve personally played Microscope and can say that it is well designed and very polished, definitely a product of years of playtesting.

They’re also productive and engaging in that they keep a record on the forums of all the games they play each week, sometimes including descriptions of the stories they created and how the games went. I find this very useful because I’m always on the lookout for new story games to try out. I kind of wish I lived in Seattle and could join the story games community, but hopefully we can get our fledgling group in Salt Lake up to the standard they have set.

What struck me about this example was that it gets to the heart of what community should be and often is – providing a welcoming, supportive environment for people with like-minded ideas and interests.

While much of my work focuses on the complexities of building collaborative communities with the intricacies of how people work together, we should always remember the huge value of what I refer to as read communities where people simply get together to have fun with each other. Garrett’s example was a perfect summary of a group doing great work here.

Thanks everyone for your suggestions, congratulations to Garrett for winning the prize, and thanks to Luma for providing the prize. Garrett, your Luma will be in the mail soon!

The post Luma Giveaway Winner – Garrett Nay appeared first on Jono Bacon.

on November 29, 2016 12:08 AM

November 28, 2016

Welcome to the Ubuntu Weekly Newsletter. This is issue #489 for the week November 21 – 27, 2016, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Chris Guiver
  • Elizabeth K. Joseph
  • David Morfin
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License

on November 28, 2016 10:37 PM
Rustifying IronFunctions

As mentioned in my previous blog post, there is a new open-source, lambda-compatible, on-premise, language-agnostic, serverless compute service called IronFunctions.

While IronFunctions is written in Go, Rust is still a much-admired language, and it was decided to add support for it in the fn tool.

So now you can use the fn tool to create and publish functions written in rust.

Using rust with functions

The easiest way to create an iron function in Rust is via cargo and fn.

Prerequisites

First create an empty rust project as follows:

$ cargo init --name func --bin

Make sure the project name is func and is of type bin. Now just edit your code, a good example is the following "Hello" example:

use std::io;  
use std::io::Read;

fn main() {  
    let mut buffer = String::new();
    let stdin = io::stdin();
    if stdin.lock().read_to_string(&mut buffer).is_ok() {
        println!("Hello {}", buffer.trim());
    }
}

You can find this example code in the repo.

Once done you can create an iron function.

Creating a function

$ fn init --runtime=rust <username>/<funcname>

in my case it's fn init --runtime=rust seiflotfy/rustyfunc, which will create the func.yaml file required by functions.

Building the function

$ fn build

This will create a Docker image <username>/<funcname> (again, in my case seiflotfy/rustyfunc).

Testing

You can run this locally without pushing it to functions yet by running:

$ echo Jon Snow | fn run
Hello Jon Snow  

Publishing

In the directory of your rust code do the following:

$ fn publish -v -f -d ./

This will publish your code to your functions service.

Running it

Now to call it on the functions service:

$ echo Jon Snow | fn call seiflotfy rustyfunc 

which is the equivalent of:

$ curl -X POST -d 'Jon Snow' http://localhost:8080/r/seiflotfy/rustyfunc

Next

In the next post I will be writing a more computation-intensive Rust function to test/benchmark IronFunctions, so stay tuned :D

on November 28, 2016 09:47 PM

November 27, 2016

UbuCon Europe in the retrospective

Sujeevan Vijayakumaran

Last weekend the very first UbuCon Europe took place in Essen, Germany. It was the second UbuCon where I was the head of the organisation team. But this one was the first international UbuCon, which had a few more challenges compared to a national UbuCon. ;)

This blog post focuses on both the event itself and some information about the organisation.

Thursday

The first unofficial day of the UbuCon was Thursday, when some people had already arrived from different countries. We were already ten people from five different countries and we visited the Christmas market in Essen, which opened that day. Luckily we had Nathan Haines with us, so he could translate all the alcoholic drinks from German to English, because I don't know anything about that. ;)

Friday

The first official day started in the afternoon with a guided tour through Zeche Zollverein. We were 18 people, this time from eight different countries. The tour showed us the history of the local area and the coal mines which were active in the past, covering the whole production line from coal mining to processing. The tour took two hours, and after that we went to the Unperfekthaus, where the first social event of the weekend took place. There, we were roughly fifty people, mostly drinking, eating and talking.

It was also the first chance to see familiar and new faces again!

Saturday

Saturday started with my quick introduction to the event. After that, Canonical CEO Jane Silber held the first keynote, where she talked mostly about IoT and the cloud. I was glad that she accepted my invitation, even though she had to leave after lunch. The day was packed with different talks and workshops.

I sadly couldn't join every talk, but the talk from Microsoft about "Bash on Ubuntu on Windows" was quite interesting. Laura Czajkowski's talk about "Supporting Inclusion & Involvement in a Remote Distributed Team" was short but also interesting. The day ended with the raffle and the UbuCon Quiz. Everyone could buy an unlimited number of raffle tickets for 1€ each, so there were a few people with more than ten tickets. We mostly had different Ubuntu USB sticks, three Ubuntu books, Microsoft T-shirts, a Nextcloud Box and the bq Aquaris M10 tablet, which were pretty popular. Funnily enough, some people won more than one prize. The UbuCon Quiz afterwards was fun too. The ultimate answer to every question seemed to be "Midnight Commander" :). After the quiz the second social event started and was joined by about 80 people.

Sunday

After the long Saturday, Sunday started again at around 10 o'clock in the morning. There were different talks and workshops again. Daniel Holbach did a workshop on how to create snaps, and Costales did a talk about his navigation app uNav. Later Alan Pope talked about how to bring an app as a snap to the store. Elizabeth K. Joseph talked about how to build a career with Ubuntu and FOSS, and Olivier Paroz talked about Nextcloud and the upcoming features.

The day and also the conference ended at 5pm. At that time many people were already on their way back home.

Conclusion

We welcomed 130 people from 17 different countries and three continents. Originally I didn't expect that many people from other countries; in the end, 55% of the attendees were from Germany. Last year a similar number of people attended the German UbuCon. Personally I'm pretty happy that the event took place without big issues or problems. The biggest problem was just the payment, which was rather complicated for most people. It was a good decision to use the Unperfekthaus as the venue for our event. We didn't have to organise food and drinks, because that was already included. The projectors were already set up and even the WiFi worked without problems. The mix of talks was good too: we had different levels of talks, for beginners as well as advanced users and developers.

At this point I want to thank a lot of people. First of all, Canonical's Community Team, including David Planella, Michael Hall, Daniel Holbach and Alan Pope, who helped us with the overall organisation and were always ready when we needed help. Also, thanks to Marius Quabeck and Ilonka O., who joined the weekly hangouts with the Community Team and helped with a lot of smaller and bigger organisational tasks, too. Jonathan Liebers and Jens Holschbach actually brought the UbuCon to Essen, even though the Unperfekthaus wasn't the first choice. Ilonka and Veit Jahns also helped with handling all the submitted talks and workshops. Sarah, Peter and Philipp were in the wrong place at the wrong time and got recruited to handle the registration desk: thanks and sorry ;)! Last but not least, Torsten Franz and Thoralf Schilde from ubuntu Deutschland e.V., which was our legal entity for hosting the UbuCon and handling all the bills.

Also: Never forget the Sponsors: Microsoft, otris software AG, Nextcloud, bytemine, b1 systems, ubuntu-fr and Ubuntu User.

Besides the help with the organisation, I also want to thank every speaker and visitor who actually formed the content of the conference. I'm really glad that so many people said they liked it, and I'm really looking forward to next year's UbuCon Europe, which will take place in Paris, France!

See you there!

on November 27, 2016 11:30 AM

November 26, 2016

stress-ng is a tool that I have been developing on-and-off for a few years. It is designed to stress kernels to force out bugs, stress CPU and memory, and it also provides some performance benchmarking metrics.

stress-ng is now entering the maturity part of the development phase; however, there is always scope to add new stressors and generally improve the tool. I've just released version 0.07.07 for the Ubuntu Zesty 17.04 release and it contains a few additional features:
  • SIGUSR2 sent to stress-ng will dump out the current system load and memory statistics
  • Sched policy stress tests for different scheduler configurations
  • Add a missing --sockfd-port option
And various bug fixes:
  • Fixed up some minor memory leaks
  • Missing counter stats on bind-mount, fp-error, personality and resources stressors
  • Fix the --fiemap-bytes option
  • Fix up build warnings with various compilers and static analyzers
The major change to stress-ng over the past month was an internal re-working of system call and GNU features to abstract these into a shim layer to reduce the number of build-conditional #ifdef paths around the code. This simplifies portability, so the code now builds more easily across a range of systems and with various versions of gcc and clang, and it fixes some issues on older kernels too. It also makes the code faster to statically analyze with cppcheck.
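As a quick usage sketch, a short CPU-only run with summary metrics looks like this, and the new SIGUSR2 handler can be poked from another terminal:

stress-ng --cpu 4 --timeout 60s --metrics-brief

# from another terminal: ask the running (parent) stress-ng to dump load and memory stats
pkill -USR2 -o stress-ng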

For more details, visit the stress-ng project page or the quick help guide.
on November 26, 2016 10:35 AM

November 25, 2016

We just released the first beta of APT 1.4 to Debian unstable (beta here means that we don’t know any other big stuff to add to it, but are still open to further extensions). This is the release series that will be released with Debian stretch, Ubuntu zesty, and possibly Ubuntu zesty+1 (if the Debian freeze takes a very long time, even zesty+2 is possible). It should reach the master archive in a few hours, and your mirrors shortly after that.

Security changes

APT 1.4 by default disables support for repositories signed with SHA1 keys. I announced back in January that it was my intention to do this during the summer for development releases, but I only remembered the Jan 1st deadline for stable releases supporting that (APT 1.2 and 1.3), so better late than never.

Around January 1st, the same or a similar change will occur in the APT 1.2 and 1.3 series in Ubuntu 16.04 and 16.10 (subject to approval by Ubuntu’s release team). This should mean that repository providers had about one year to fix their repositories, and more than 8 months since the release of 16.04. I believe that 8 months is a reasonable time frame to upgrade a repository signing key, and hope that providers who have not updated their repositories yet will do so as soon as possible.

Performance work

APT 1.4 provides a 10-20% performance increase in cache generation (and according to callgrind, we went from approx 6.8 billion to 5.3 billion instructions for my laptop’s configuration, a reduction of more than 21%). The major improvements are:

We switched the parsing of Deb822 files (such as Packages files) to my perfect hash function TrieHash. TrieHash – which generates C code from a set of words – is about equal to or twice as fast as the previously used hash function (and two to three times faster than gperf), and we save an additional 50% of that time as we only have to hash once during parsing now, instead of during look up as well. APT 1.4 marks the first time TrieHash is used in any software. I hope that it will spread to dpkg and other software at a later point in time.

Another important change was to drop normalization of Description-MD5 values, the fields mapping a description in a Packages file to a translated description. We used to parse the hex digits into a native binary stream, and then convert it back to hex digits for comparisons, which cost us about 5% of the run time performance.

We also optimized one of our hash functions – the VersionHash that hashes the important fields of a package to recognize packages with the same version, but different content – to not normalize data to a temporary buffer anymore. This buffer has been the subject of some bugs (overflow, incompleteness) in the recent past, and also caused some slowdown due to the additional writes to the stack. Instead, we now pass the bytes we are interested in directly to our CRC code, one byte at a time.

There were also some other micro-optimisations: For example, the hash tables in the cache used to be ordered by standard compare (alphabetical followed by shortest). It is now ordered by size first, meaning we can avoid data comparisons for strings of different lengths. We also got rid of a std::string that cannot use short string optimisation in a hot path of the code. Finally, we also converted our case-insensitive djb hashes to not use a normal tolower_ascii(), but introduced tolower_ascii_unsafe() which just sets the “lowercase bit” (| 0x20) in the character.

Others

  • Sandboxing now removes some environment variables like TMP from the environment.
  • Several improvements to installation ordering.
  • Support for armored GPG keys in trusted.gpg.d.
  • Various other fixes

For a more complete overview of all changes, consult the changelog.


Filed under: Debian, Ubuntu
on November 25, 2016 11:43 PM

Ubuntu

Android Studio is a great development environment and is available on Ubuntu. I’m using Ubuntu Mate 16.10 “Yakkety Yak”.
 
First install a Java Development Kit (JDK). OpenJDK is pre-installed, or you can use Oracle Java 8 (there is a great guide here). I don’t wish to argue over your choice – I need to use the latter (my tutor does). Download Android Studio here – I extracted it to /opt, ran the installer, and used my home folder for the SDK. If you are using 64-bit, you need the 32-bit GNU standard C++ library:
sudo apt install lib32stdc++6

Virtualisation support is interesting. I read two tutorials and Google’s guide. The former make reference to command-line options not present in version 2.2.2. These posts suggest this is a bug, but it may now be default behaviour. First enable virtualisation in the BIOS (check whether it is enabled using “kvm-ok”).

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils
sudo adduser dougie kvm
sudo adduser dougie libvirtd

This results in an error.

Using the system version of libstdc++.so.6 works. Add the following to /etc/environment:

ANDROID_EMULATOR_USE_SYSTEM_LIBS=1

It seems snappy, but with no feedback I’m unsure whether it is actually accelerated.

So I now have a development environment set up for my project. The next hurdle is to choose a title. So far it is a development project, it is a distributed application, and it uses Android.

on November 25, 2016 10:16 PM

Ansible

LXD (a working example from this post can be found on my GitHub page)

Having worked with Ansible for a couple of years now, and using LXD as my local test environment, I was waiting for a simple solution to create LXD containers (locally and remotely) with Ansible from scratch, without using any helper methods like shell: lxd etc.

So, since Ansible 2.2 we have native LXD support.
Furthermore, the Ansible Team actually showed some respect to the Python3 Community, and has implemented Python3 Support.

Preparations

First of all, you need to have the latest Ansible release, or install it in a Python 3 virtual environment via pip install ansible.

Create your Ansible directory layout

To make your life a little bit easier later, create your Ansible directory structure and turn it into a Git repository.

user@home: ~> mkdir -p ~/Projects/git.ansible/lxd-containers  
user@home: ~> cd ~/Projects/git.ansible/lxd-containers  
user@home: ~/Projects/git.ansible/lxd-containers> mkdir -p {inventory,roles,playbooks}

Create your inventory file

Imagine you want to create 5 new LXD containers. You could create 5 playbooks to do it, or you can be smart and let Ansible do it for you.
Working with inventory files is easy; an inventory is simply a file with an INI structure.

Let's create an inventory file for new LXD containers in ~/Projects/git.ansible/lxd-containers/inventory/containers:

[local]
localhost

[containers]
blog-01 ansible_connection=lxd  
blog-02 ansible_connection=lxd  
blog-03 ansible_connection=lxd  
blog-04 ansible_connection=lxd  
blog-05 ansible_connection=lxd  

We have now defined 5 containers.
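As a quick sanity check (optional, and assuming you run it from the repository root), you can ask Ansible to list the hosts it sees in that group:

user@home: ~/Projects/git.ansible/lxd-containers> ansible containers -i inventory/containers --list-hosts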

Create a playbook for running Ansible

Now we need an Ansible playbook.

A playbook is just a simple YAML file. You can edit this file with your editor of choice. I personally like Sublime Text 3 or GitHub's Atom, but any other editor (like Vim or Emacs) will do.

Create a new file under ~/Projects/git.ansible/lxd-containers/playbooks/lxd_create_containers.yml:

- hosts: localhost
  connection: local
  roles:
    - create_lxd_containers

Let's go through this briefly:

  • hosts: defines the hosts to run Ansible on. Using it like this means this playbook runs on your local machine.
  • connection: local: Ansible will use a local connection, which is like SSHing into your local box.
  • roles: ...: a list of Ansible roles to be used by this playbook.

You could also write all Ansible tasks in this playbook, but as you want to reuse several tasks for certain workloads, it's a better idea to divide them into roles.

Create the the Ansible role

Ansible roles are used to separate repeating tasks from the playbooks.

Think about this example: You have a playbook for all your webservers like this:

- hosts: webservers
  tasks:
    - name: apt update
      apt: update_cache=yes

and you have a playbook for all your database servers like this:

- hosts: databases
  tasks:
    - name: apt update
      apt: update_cache=yes

What do you see? Yes, the same task twice, namely "apt update".

To make our lives easier, instead of writing a task in every playbook to update the system's package archive cache, we create an Ansible role.

Ansible roles have a special directory structure; I advise reading the good documentation over at the Ansible HQ.

Let's start with our role for creating LXD containers:

Create the directory structure

user@home: ~> cd ~/Projects/git.ansible/lxd-containers/roles/  
user@home: ~/Projects/git.ansible/lxd-containers/roles/> mkdir -p create_lxd_containers/tasks  

Now create a new YAML file and name it ~/Projects/git.ansible/lxd-containers/roles/create_lxd_containers/tasks/main.yml with this content:

- name: Create LXD Container
  connection: local
  become: false
  lxd_container:
    name: "{{item}}"
    state: started
    source:
      type: image
      mode: pull
      server: https://cloud-images.ubuntu.com/releases
      protocol: simplestreams
      alias: 16.04/amd64
    profiles: ['default']
    wait_for_ipv4_addresses: true
    timeout: 600
  with_items:
    - "{{groups['containers']}}"

- name: Check if Python2 is installed in container
  delegate_to: "{{item}}"
  raw: dpkg -s python
  register: python_check_is_installed
  failed_when: python_check_is_installed.rc not in [0,1]
  changed_when: false
  with_items:
    - "{{groups['containers']}}"

- name: Install Python2 in container
  delegate_to: "{{item.item}}"
  raw: apt-get update && apt-get install -y python
  when: "{{item.rc == 1}}"
  with_items:
    - "{{python_check_is_installed.results}}"

Let's go through the different tasks

Create the LXD Container

- name: Create LXD Container
  connection: local
  become: false
  lxd_container:
    name: "{{item}}"
    state: started
    source:
      type: image
      mode: pull
      server: https://cloud-images.ubuntu.com/releases
      protocol: simplestreams
      alias: 16.04/amd64
    profiles: ['default']
    wait_for_ipv4_addresses: true
    timeout: 600
  with_items:
    - "{{groups['containers']}}"
  • connection: local: means it's only running on your local machine.
  • become: false: don't use su or sudo to become a superuser.
  • lxd_container: ...: this is the Ansible LXD module definition. Read the documentation about this module here: Ansible LXD Documentation
  • with_items: ...: this is one of the many Ansible loop statements. In this case, we are looping over the Inventory Group 'containers' (which we defined in the inventory file earlier).

The "{{item}}" will be prefilled by the loop from with_items:..., again a hint to read the good documentation of Ansible about loops.

Check if Python2 is installed inside the container

- name: Check if Python2 is installed in container
  delegate_to: "{{item}}"
  raw: dpkg -s python
  register: python_check_is_installed
  failed_when: python_check_is_installed.rc not in [0,1]
  changed_when: false
  with_items:
    - "{{groups['containers']}}"
  • delegate_to: ...: this key tells Ansible not to use the default connection anymore, but to delegate the connection and the work to the host mentioned in delegate_to.
  • raw: ...: this key advises Ansible to use the raw module. Raw means we cannot rely on anything running on the target, not even Python, which Ansible normally needs. It simply uses an SSH connection (by default), or in our case the local LXD connection (like lxc exec <container-name> -- <command>). In this case we execute dpkg -s python, because we want to find out whether Python 2 is installed.
  • register: ...: during execution of the raw: ... command, Ansible captures the output (stdout, stderr) and the return code of the command. register: ... defines a "variable" to store this result. Normally this "variable" is a Python/JSON dictionary for a particular host, but as we are iterating through the 'containers' inventory group, this "variable" has a results array (which we will use in the next task), where Ansible stores the outputs of all the per-host checks. During the task execution itself, though, this "variable" is still usable as a single result set.
  • failed_when: ...: this marks the task as failed if the registered "variable" is not accessible or the return code is neither 0 nor 1 (i.e. the command returned neither success nor the expected "not installed" failure, but something else). (More documentation can be found here.)
  • changed_when: false: whenever this task runs it would normally be reported as changed, so Ansible would count a change on every run. To prevent this, we set changed_when to false. (More documentation can be found here.)
  • with_items: ...: this is one of the many Ansible loop statements. In this case, we are looping over the inventory group 'containers' (which we defined in the inventory file earlier).

The "{{item}}" will be prefilled by the loop from with_items:..., again a hint to read the good documentation of Ansible about loops.

Install Python2 if it is not installed in the container

- name: Install Python2 in container
  delegate_to: "{{item.item}}"
  raw: apt-get update && apt-get install -y python
  when: "{{item.rc == 1}}"
  with_items:
    - "{{python_check_is_installed.results}}"
  • delegate_to: ...: this key tells Ansible not to use the default connection anymore, but to delegate the connection and the work to the host mentioned in delegate_to.
  • raw: ...: this key advises Ansible to use the raw module again, since Python 2 may still be missing inside the container. In this case we execute apt-get update && apt-get install -y python to install Python 2.
  • when: ...: this is a conditional; the task only executes when the condition is met, in this case when the return code equals 1. That is the case when the Python 2 check reported that Python 2 was not installed.
  • with_items: ...: this is one of the many Ansible loop statements. In this case, we are looping over the result sets collected by the previous check (python_check_is_installed.results).

The "{{item}}" will be prefilled by the loop from with_items:..., again a hint to read the good documentation of Ansible about loops. In this case, we are looping through the result sets of the Python2 install check and the collected results in the 'variable' python_check_is_installed.

Some more information

In the playbook and in the first task (creating the LXD containers) we used a local connection, which means nothing else than that Ansible does its work on your local workstation.
Inside the Inventory INI file there is this key/value pair: ansible_connection=lxd.

For the two other tasks, which are delegated to the freshly created containers, Ansible would normally attempt an SSH connection (if you removed the ansible_connection=lxd). With this special configuration in the inventory INI file it won't try to use SSH towards the containers, but the local LXD connection instead.
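
Once the containers exist and Python 2 is installed in them, you can verify this connection path with an ad-hoc ping; it should go through the LXD connection plugin rather than SSH (a quick check added for illustration, not from the original post):

~/Projects/git.ansible/lxd-containers > ansible containers -i inventory/containers -m ping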

Bringing it all together

Let's start Ansible to do the work we want it to do:

~/Projects/git.ansible/lxd-containers > ansible-playbook -i inventory/containers playbooks/lxd_create_containers.yml

PLAY [localhost] ***************************************************************

TASK [setup] *******************************************************************  
ok: [localhost]

TASK [create_lxd_containers : Create LXD Container] ****************************  
changed: [localhost] => (item=blog-01)  
changed: [localhost] => (item=blog-02)  
changed: [localhost] => (item=blog-03)  
changed: [localhost] => (item=blog-04)  
changed: [localhost] => (item=blog-05)

TASK [create_lxd_containers : Check if Python2 is installed in container] ******  
ok: [localhost -> blog-01] => (item=blog-01)  
ok: [localhost -> blog-02] => (item=blog-02)  
ok: [localhost -> blog-03] => (item=blog-03)  
ok: [localhost -> blog-04] => (item=blog-04)  
ok: [localhost -> blog-05] => (item=blog-05)

TASK [create_lxd_containers : Install Python2 in container] ********************  
changed: [localhost -> blog-01] => (item={'changed': False, 'stdout': u'', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': u'blog-01'}, '_ansible_item_result': True, 'failed': False, 'item': u'blog-01', 'rc': 1, 'invocation': {'module_name': u'raw', 'module_args': {u'_raw_params': u'dpkg -s python'}}, 'stdout_lines': [], 'failed_when_result': False, 'stderr': u"dpkg-query: package 'python' is not installed and no information is available\nUse dpkg --info (= dpkg-deb --info) to examine archive files,\nand dpkg --contents (= dpkg-deb --contents) to list their contents.\n"})  
changed: [localhost -> blog-02] => (item={'changed': False, 'stdout': u'', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': u'blog-02'}, '_ansible_item_result': True, 'failed': False, 'item': u'blog-02', 'rc': 1, 'invocation': {'module_name': u'raw', 'module_args': {u'_raw_params': u'dpkg -s python'}}, 'stdout_lines': [], 'failed_when_result': False, 'stderr': u"dpkg-query: package 'python' is not installed and no information is available\nUse dpkg --info (= dpkg-deb --info) to examine archive files,\nand dpkg --contents (= dpkg-deb --contents) to list their contents.\n"})  
changed: [localhost -> blog-03] => (item={'changed': False, 'stdout': u'', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': u'blog-03'}, '_ansible_item_result': True, 'failed': False, 'item': u'blog-03', 'rc': 1, 'invocation': {'module_name': u'raw', 'module_args': {u'_raw_params': u'dpkg -s python'}}, 'stdout_lines': [], 'failed_when_result': False, 'stderr': u"dpkg-query: package 'python' is not installed and no information is available\nUse dpkg --info (= dpkg-deb --info) to examine archive files,\nand dpkg --contents (= dpkg-deb --contents) to list their contents.\n"})  
changed: [localhost -> blog-04] => (item={'changed': False, 'stdout': u'', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': u'blog-04'}, '_ansible_item_result': True, 'failed': False, 'item': u'blog-04', 'rc': 1, 'invocation': {'module_name': u'raw', 'module_args': {u'_raw_params': u'dpkg -s python'}}, 'stdout_lines': [], 'failed_when_result': False, 'stderr': u"dpkg-query: package 'python' is not installed and no information is available\nUse dpkg --info (= dpkg-deb --info) to examine archive files,\nand dpkg --contents (= dpkg-deb --contents) to list their contents.\n"})  
changed: [localhost -> blog-05] => (item={'changed': False, 'stdout': u'', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': u'blog-05'}, '_ansible_item_result': True, 'failed': False, 'item': u'blog-05', 'rc': 1, 'invocation': {'module_name': u'raw', 'module_args': {u'_raw_params': u'dpkg -s python'}}, 'stdout_lines': [], 'failed_when_result': False, 'stderr': u"dpkg-query: package 'python' is not installed and no information is available\nUse dpkg --info (= dpkg-deb --info) to examine archive files,\nand dpkg --contents (= dpkg-deb --contents) to list their contents.\n"})

PLAY RECAP *********************************************************************  
localhost                  : ok=4    changed=2    unreachable=0    failed=0   

~/Projects/git.ansible/lxd-containers > lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| blog-01 | RUNNING | 10.139.197.44 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| blog-02 | RUNNING | 10.139.197.10 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| blog-03 | RUNNING | 10.139.197.188 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| blog-04 | RUNNING | 10.139.197.221 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| blog-05 | RUNNING | 10.139.197.237 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+

Awesome, 5 containers created and Python2 installed.

Now it's time to do the real work (like installing your apps and testing them), for example with a small follow-up playbook like the one sketched below.
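
As a minimal sketch of such a follow-up step (the playbook name install_app.yml and the nginx package are just illustrative choices, not part of this post), a second playbook can target the containers group directly and use the regular apt module, now that Python 2 is available:

# playbooks/install_app.yml
- hosts: containers
  become: false
  tasks:
    - name: Install an example application (nginx)
      apt:
        name: nginx
        state: present
        update_cache: yes

~/Projects/git.ansible/lxd-containers > ansible-playbook -i inventory/containers playbooks/install_app.yml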

on November 25, 2016 01:50 PM

November 24, 2016

This is part 3; the older parts can be found here: part 1 and part 2

And again it took quite a while to write a new update about my experiments with writing GStreamer elements in Rust. The previous articles can be found here and here. Since last time, there was also the GStreamer Conference 2016 in Berlin, where I had a short presentation about this.

Progress was rather slow unfortunately, due to work and other things getting into the way. Let’s hope this improves. Anyway!

There will be three parts again, and especially the last one is where I could use some suggestions from more experienced Rust developers about how to solve state handling / state machines in a nicer way. The first part will be about parsing data in general, especially from untrusted sources. The second part will be about my experimental and current proof-of-concept FLV demuxer.

Parsing Data

Safety?

First of all, you probably all saw a couple of CVEs about security relevant bugs in (rather uncommon) GStreamer elements going around. While all of them would’ve been prevented by having the code written in Rust (due to by-default array bounds checking), that’s not going to be our topic here. They also would’ve been prevented by using various GStreamer helper API, like GstByteReader, GstByteWriter and GstBitReader. So just use those, really. Especially in new code (which is exactly the problem with the code affected by the CVEs, it was old and forgotten). Don’t do an accountant’s job, counting how much money/many bytes you have left to read.

But yes, this is something where Rust will also provide an advantage by having by-default safety features. It’s not going to solve all our problems, but at least some classes of problems. And sure, you can write safe C code if you’re careful but I’m sure you also drive with a seatbelt although you can drive safely. To quote Federico about his motivation for rewriting (parts of) librsvg in Rust:

Every once in a while someone discovers a bug in librsvg that makes it all the way to a CVE security advisory, and it’s all due to using C. We’ve gotten double free()s, wrong casts, and out-of-bounds memory accesses. Recently someone did fuzz-testing with some really pathological SVGs, and found interesting explosions in the library. That’s the kind of 1970s bullshit that Rust prevents.

You can directly replace the word librsvg with GStreamer here.

Ergonomics

The other aspect of parsing data is that it's usually a very boring part of programming. It should be as painless as possible, and as easy as possible to do in a safe way; after having written your 100th parser by hand you probably don't want to do that again. Parser combinator libraries like Parsec in Haskell provide a nice alternative. You essentially write down something very close to a formal grammar of the format you want to parse, and out of this comes a parser for the format. Unlike parser generators such as good old yacc, everything is written in the target language though, and there is no separate code generation step.

Rust, being quite a bit more expressive than C, also led people to write parser combinator libraries. They are all not as ergonomic (yet?) as in Haskell, but still a big improvement over anything else. There's nom, combine and chomp, each with a slightly different approach. Choose your favorite; I decided on nom for the time being.
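
To give an idea of what this looks like in practice, here is a minimal, self-contained sketch using nom's classic macro API that parses just the FLV signature and version byte (my own illustrative example, not code from the demuxer; the struct name is made up):

#[macro_use]
extern crate nom;

use nom::be_u8;

#[derive(Debug)]
struct FlvSignature {
    version: u8,
}

// The three-byte "FLV" magic, followed by a single version byte.
named!(flv_signature<FlvSignature>,
    do_parse!(
        tag!("FLV") >>
        version: be_u8 >>
        (FlvSignature { version: version })
    )
);

fn main() {
    let data = b"FLV\x01\x05\x00\x00\x00\x09";
    println!("{:?}", flv_signature(&data[..]));
}

The nice part is that the parser definition reads almost like the format specification itself.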

A FLV Demuxer in Rust

For implementing a demuxer, I decided on using the FLV container format. Mostly because it is super-simple compared to e.g. MP4 and WebM, but also because Geoffroy, the author of nom, wrote a simple header parsing library for it already and a prototype demuxer using it for VLC. I’ll have to extend that library for various features in the near future though, if the demuxer should ever become feature-equivalent with the existing one in GStreamer.

As usual, the code can be found here, in the “demuxer” branch. The most relevant files are rsdemuxer.rs and flvdemux.rs.

Following the style of the sources and sinks, the first is some kind of base class / trait for writing arbitrary demuxers in Rust. It’s rather unfinished at this point though, just enough to get something running. All the FLV specific code is in the second file, and it’s also very minimal for now. All it can do is to play one specific file (or hopefully all other files with the same audio/video codec combination).

As part of all this, I also wrote bindings for GStreamer's buffer abstraction and a Rust rewrite of the GstAdapter helper type. Both showed Rust's strengths quite well: the buffer bindings by being able to express various concepts of the buffers in a compiler-checked, safe way in Rust (e.g. ownership, readability/writability), the adapter implementation by being so much shorter (it's missing features… but still).

So here we are, this can already play one specific file (at least) in any GStreamer based playback application. But some further work is necessary, for which I hopefully have some time in the near future. Various important features are still missing (e.g. other codecs, metadata extraction and seeking), the code is rather proof-of-concept style (stringly-typed media formats, lots of unimplemented!() and .unwrap() calls). But it shows that writing media handling elements in Rust is definitely feasible, and generally seems like a good idea.

If only we had Rust already when all this media handling code in GStreamer was written!

State Handling

Another reason why all this took a bit longer than expected is that I experimented a bit with expressing the state of the demuxer in a more clever way than what we usually do in C. If you take a look at the GstFlvDemux struct definition in C, it contains about 100 lines of field declarations. Most of them are only valid / useful in specific states that the demuxer is in. Doing the same in Rust would of course also be possible (and rather straightforward), but I wanted to try to do something better, especially by making invalid states unrepresentable.

Rust has this great concept of enums, also known as tagged unions or sum types in other languages. These are not to be confused with C enums or unions, but instead allow multiple variants (like C enums) with fields of various types (like C unions). But all of that in a type-safe way. This seems like the perfect tool for representing complicated state and building a state machine around it.

So much for the theory. Unfortunately, I’m not too happy with the current state of things. It is like this mostly because of Rust’s ownership system getting into my way (rightfully, how would it have known additional constraints I didn’t know how to express?).

Common Parts

The first problem I ran into was that many of the states have common fields, e.g.

enum State {
    ...
    NeedHeader,
    HaveHeader {header: Header, to_skip: usize },
    Streaming {header: Header, audio: ... },
    ...
}

When writing code that matches on this, and that tries to move from one state to another, these common fields would have to be moved. But unfortunately they are (usually) borrowed by the code already and thus can’t be moved to the new variant. E.g. the following fails to compile

match self.state {
        ...
        State::HaveHeader {header, to_skip: 0 } => {
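            // error: cannot move `header` out of `self.state`, which is only borrowed here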
            
            self.state = State::Streaming {header: header, ...};
        },
    }
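
One common workaround (my own suggestion for illustration, not something from the original post, and using simplified placeholder types) is to temporarily take ownership of the state with std::mem::replace, do the transition as a value-to-value match, and store the result back:

use std::mem;

// Simplified placeholder types, just to make the sketch self-contained.
struct Header { version: u8 }

enum State {
    NeedHeader,
    HaveHeader { header: Header, to_skip: usize },
    Streaming { header: Header },
}

struct Demuxer { state: State }

impl Demuxer {
    fn advance(&mut self) {
        // Move the current state out, leaving a cheap placeholder behind,
        // so the match below can move `header` into the next variant.
        let old_state = mem::replace(&mut self.state, State::NeedHeader);
        self.state = match old_state {
            State::HaveHeader { header, to_skip: 0 } => State::Streaming { header: header },
            other => other,
        };
    }
}

fn main() {
    let mut demuxer = Demuxer {
        state: State::HaveHeader { header: Header { version: 1 }, to_skip: 0 },
    };
    demuxer.advance();
}

This is essentially the State -> State approach mentioned in the update below, with the placeholder state papering over the ownership issue.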

A Tree of States

Repeating the common parts is not nice anyway, so I went with a different solution by creating a tree of states:

enum State {
    ...
    NeedHeader,
    HaveHeader {header: Header, have_header_state: HaveHeaderState },
    ...
}

enum HaveHeaderState {
    Skipping {to_skip: usize },
    Streaming {audio: ... },
}

Apart from making it difficult to find names for all of these, and leading to relatively deeply nested code, this works:

match self.state {
        ...
        State::HaveHeader {ref header, ref mut have_header_state } => {
            match *have_header_state {
                HaveHeaderState::Skipping { to_skip: 0 } => {
                    *have_header_state = HaveHeaderState::Streaming { audio: ... };
                },
                ...
            }
        },
    }

If you look at the code however, this makes the code much bigger than needed, and I'm also not sure yet how it could nicely move "backwards" by one state if that situation ever appears. Also, there is still the previous problem, although it occurs less often: if I matched on to_skip by reference here (or if it was not a Copy type), the compiler would prevent me from overwriting have_header_state for the same reasons as before.

So my question for the end: How are others solving this problem? How do you express your states and write the functions around them to modify the states?

Update

I actually implemented the state handling as a State -> State function before (and forgot about that), which seems conceptually the right thing to do. It however has a couple of other problems. Thanks for the suggestions so far, it seems like I’m not alone with this problem at least.

Update 2

I’ve gone a bit closer to the C-style struct definition now, as it makes the code less convoluted and allows me to just move forward with the code. The current status can be seen here now, which also involves further refactoring (and e.g. some support for metadata).

on November 24, 2016 11:10 PM

Gwenview Importer is back

Aurélien Gâteau

I spent some time over the last weeks to port Gwenview Importer to KDE Frameworks 5, as I was getting frustrated with importing pictures by hand. It's a straight port: no new features.

Here is a screenshot after I filled my SD Card with random pictures of my daughter and cat for the purpose of illustrating this blog post :)

Gwenview Importer

I missed the KDE Applications 16.12 deadline, but the code is in Gwenview master now, so Gwenview Importer should be in the next KDE Applications release.

on November 24, 2016 06:50 AM