I have paired a lot over the past week (which makes me tremendously happy!) and I noticed how lost I can get in someone else’s terminal if I don’t have my regular bash prompt - not only the cute little heart that welcomes me every time I run the terminal, but also the git branch name I’m currently on.

As I couldn’t find the tutorial I used to learn how to parse the git branch, I decided to go through my .bashrc and write one myself.

Bash prompt variables

Bash prompt variables hold the text that shows as the terminal prompt in various contexts. Their values are expanded (and any embedded commands run) just before Bash displays them. There are several of them, but for now I’m going to focus on PS1, which is the primary prompt string.

In a bash prompt, you can use:

  • backslash-escaped special characters for things like the current user name (\u), the time (\t), the host name (\h), etc.
  • colors both for the foreground and the background
  • unicode characters (letters, numbers, symbols, emoji)
  • custom variables

Let’s look at an example value of PS1: \s-\v\$

  • \s the name of the shell
  • - just a dash
  • \v the version of the shell
  • \$ a dollar sign at the end of the prompt (it becomes # when you’re root)

which in my terminal evaluates to: -bash-3.2$

To change the value of PS1 for the current session, you can use export PS1="<value>" in the terminal:

  • adding a lambda
    export PS1="λ "
  • changing the lambda’s color to red
    export PS1="\[\033[0;31m\]λ \[\033[0m\]"
  • adding the current directory before the red lambda
    export PS1="\w\[\033[0;31m\] λ \[\033[0m\]"

and so on.

If you want to make the PS1 change permanent, you can add it to your .bashrc (or .bash_profile, if you’re on a Mac) and reload your session with

source ~/.bashrc

et voilà!

Getting information from git

When I work on code, I usually want to know a few things:

  • am I using git? (if not, I might want to git init)
  • what branch I am currently on?
  • are there any changes in the directory - untracked, unadded, added or deleted files?

I can check all of those things simply by running git status, but it gets tedious to do that every time I need to change something (am I on the right branch? did I add everything? did I switch to a feature branch?). I can also parse the output of git commands with Bash and add the parsed information to my prompt, using custom variables.


In order to get the current branch, you can parse the output of several different git commands. I don’t think there is one good way to do it, as you might want to get slightly different results depending on the context, but some good ideas are:

  • git rev-parse --abbrev-ref HEAD - which gives you the name of the local branch, and a consistent HEAD when you’re in a detached-HEAD state
  • git branch | sed -n '/\* /s///p' - which gives you the name of the local branch, and a descriptive output (e.g. (HEAD detached at <commit>)) in other contexts

Whichever method you choose, to add it to your PS1 prompt you can:

  • add a function for parsing the branch (the 2>/dev/null suppresses git’s error output when you’re not in a repository):
    git_prompt() {
      local ref=$(git branch 2>/dev/null | sed -n '/\* /s///p')
      if [ "$ref" != "" ]; then
        echo "($ref) "
      fi
    }
  • add the function output to your prompt (note the escaped \$, so the function is re-run every time the prompt is drawn, not just once at export time)
    export PS1="\w\[\033[0;31m\] λ \$(git_prompt)\[\033[0m\]"

    or, with a hint of color

    export PS1="\w\[\033[0;31m\] λ \[\033[0;95m\]\$(git_prompt)\[\033[0m\]"

Number of various changes

You can run git status with various options - if you use the short format (git status --porcelain), you will get status codes for your paths that are much easier to parse, notably:

  • ? for untracked paths
  • M for modified paths
  • A for added paths
  • D for deleted paths

To parse untracked paths you can:

  • grep git status --porcelain for ? and count the number of lines in the output (as each untracked path is on a new line)
    untracked=`expr $(git status --porcelain 2>/dev/null | grep "?" | wc -l)`
  • wrap it in a function that returns the number of untracked paths with a symbol of your choice when it’s not 0
    function parse_untracked {
      local untracked=`expr $(git status --porcelain 2>/dev/null | grep "?" | wc -l)`
      if [ "$untracked" != "0" ]; then
        echo " ?$untracked"
      fi
    }
  • add it to your PS1 prompt
    git_prompt() {
      local ref=$(git branch 2>/dev/null | sed -n '/\* /s///p')
      if [ "$ref" != "" ]; then
        echo "($ref)$(parse_untracked) "
      fi
    }

You can add a separate function for each status - grep for the appropriate symbol, wrap the count in a function, and add it to your prompt. For reference, my full git parsing code is in this gist.

Have fun adjusting your prompt and hopefully never getting lost among your git branches!

Alicja Raszkowska | Alicja Raszkowska | 2018-01-17 20:41:00

I know I am very late on this update (and also very late on emailing back my mentors). I am sorry. It took me a long time to figure out how to put into words everything that has been going on for the past few weeks.

Let's begin with this: yes, I am so very aware there is an evaluation coming up (in two days) and that it is important "to have at least one piece of work that is visible in the week of evaluation" to show what I have been doing since the beginning of the internship.

But the truth is: as of now, I don't have any code to show. And what that screams to me is that I have failed. I didn't know what to say, either to my mentors or in here, to explain that I didn't meet everyone's expectations. That I had not been perfect.

So I had to ask: what could I learn from this, and how could I keep going and keep working on this project?

Coincidence or not, I was wondering that when I crossed paths (again) with one of the most amazing TED Talks there is:

Reshma Saujani's "Teach girls bravery, not perfection"

And yes, that could be me. Even though I had written down almost every step I had taken trying to solve the problem I got stuck on, I wasn't ready to share all that, not even with my mentors (yes, I can see now how that isn't very helpful). I would rather let them go on thinking I am lazy and didn't do anything all this time than send all those notes about my failure and have them realize I didn't know what they expected me to know or... well, that they'd picked the wrong intern.

What was I trying to do?

As I talked about in my previous post, the EventCalendar macro seemed like a good place to start doing some work. I wanted to add a piece of code to it that would allow exporting the events data to the iCalendar format. Because this is sort of what I did in my contribution for the github-icalendar, and because my mentor Daniel had suggested something like that, I thought that it would be a good way of getting myself familiar with how macro development is done for the MoinMoin wiki.

How far did I go?

As I had planned to do, I started by studying the EventMacro.py, to understand how it works, and taking notes.

EventMacro fetches events from MoinMoin pages and uses Python's pickle module to serialize and de-serialize the data. This should be okay if you can trust the people editing the wiki (and, therefore, creating the events) enough, but it might not be a good option if we start using external sources (such as third-party websites) for event data - at least, not directly on the gathered data. See the warning below, from the pickle module docs:

Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.

From the code and from the inputs from the mentors, I understand that EventMacro is more about displaying the events, putting them on a wiki page. Indeed, this could be helpful later on, but not exactly for the purpose we want now, which is to have some standalone application to gather data about the events, model this data in the way that we want it to be organized, and maybe make it accessible through an API and/or export it as JSON. Then either MoinMoin or any other FOSS community project could choose how to display and make use of it.

What did go wrong?

But the thing is... even if I had studied the code, I couldn't see it running on my MoinMoin instance. I have tried and tried, but, generally speaking, I got stuck on trying to get macros to work. Standard macros, that come with MoinMoin, work perfectly. But macros from MacroMarket, I couldn't find a way to make them work.

For the EventCalendar macro, I tried my best to follow the instructions on the Installation Guide, but I simply couldn't find a way for it to be processed.

Things I did:

  • I downloaded the macro file and renamed it to EventCalendar.py
  • I put it in the local macro directory (yourwiki/data/plugins/macro) and proceeded with the rest of the instructions.
  • When that didn't work, I copied the file to the global macro directory (MoinMoin/macro); it wasn't enough.
  • I made sure to add the .css to all styles, both common.css and screen.css; it still didn't work.
  • I thought that maybe it was the arguments on the macro, so I tried to add it to the wiki page in the following ways:




Still, the macro wasn't processed and appeared just like that on the page, even though I had already created pages with that category and added event info to them.

To investigate, I tried using other macros:

These all came with the MoinMoin core and they all worked.

I tried other ones:

That, just like EventCalendar, didn't work.

Going through these macros also made me realize how poorly documented most of them usually are, in particular about the installation and about making them work with the whole system, even when the code is clear. (And to think that at the beginning of this whole thing I had to search and read up on what DocStrings are, because the MoinMoin Coding Style says: "That does NOT mean that there should be no docstrings.". Now it seems like some developers didn't know what DocStrings were either.)

I checked permissions, but it couldn't be that, because the downloaded macros have the same permissions as the other macros and they all belong to the same user.

I thought that maybe it was a problem with Python versions, or even with the way the MoinMoin installation was done. So I tried some alternatives. First, I tried to install it again on a new CodeAnywhere Ubuntu container, but I still had the same problem.

I tried with a local Debian installation... same problem. Even though Ubuntu is based on Debian, the fact that macros didn't work on either was telling me that the problem wasn't the distribution, and that it didn't matter which packages or libraries each of them comes with. The problem seemed to be somewhere else.

Then, I proceeded to analyze the Apache error log to see if I could figure anything out.

[Thu Jan 11 00:33:28.230387 2018] [wsgi:error] [pid 5845:tid 139862907651840] [remote ::1:43998] 2018-01-11 00:33:28,229 WARNING MoinMoin.log:112 /usr/local/lib/python2.7/dist-packages/MoinMoin/support/werkzeug/filesystem.py:63: BrokenFilesystemWarning: Detected a misconfigured UNIX filesystem: Will use UTF-8 as filesystem encoding instead of 'ANSI_X3.4-1968'

[Thu Jan 11 00:34:11.089031 2018] [wsgi:error] [pid 5840:tid 139862941255424] [remote ::1:44010] 2018-01-11 00:34:11,088 INFO MoinMoin.config.multiconfig:127 using wiki config: /usr/local/share/moin/wikiconfig.pyc

Alright, the wikiconfig.py wasn't actually set to utf-8, my bad. I fixed and re-read it again to make sure I hadn't missed anything this time. I restarted the server and... nope, macros still don't work.

So, a misconfigured UNIX filesystem? I wasn't quite sure what that was, but I searched for it and it seemed to be easily solved by generating an en_US.UTF-8 locale and/or setting it, right?

Well, these errors really did go away... but even after restarting the Apache server, those macros still wouldn't work.

So this is how things went up until today. It ends up with me not having a clue where else to look to try and fix the macros and make them work so I could start coding and having some results... or does it?

This was a post about a failure, but...

Whoever wrote that "often times writing a blog post will help you find the solution you're working on" in the e-mail we received when we were accepted for Outreachy... damn, you were right.

I opened the command history to get my MoinMoin instance running again, so I could verify that the names of the macros that worked and which ones didn't were correct for this post, when...

I cannot believe I couldn't figure it out.

What had been happening all this time? Yes, the .py macro file should go to moin/data/plugin/macro, but not in the directories I was putting it in. I didn't realize that, all this time, the wiki wasn't actually installed in the yourwiki/data/plugins/macro directory where the extracted source code is. It is installed in /usr/local/share/, so the files should be put in /usr/local/share/moin/data/plugin/macro. Of course I should've realized this sooner (after all, I was the one who installed it), but... it happens.

I copied the files there, set the appropriate owner and... IT-- WORKED!

[Screenshot: Mozilla Firefox showing the MoinMoin wiki with the EventCalendar plugin working and displaying a calendar for January 2018]

Renata D'Avila | Renata's blog | 2018-01-17 19:49:00

Here is the interview series of posts. The Part 1 of this post is here.

In this tech-cartoon post, I am trying to explain the life cycle of variables, the temporary dead zone (TDZ) and how storage is created for let vs var in loops.

[Cartoon: let vs var, part 1]

[Cartoon: let vs var, part 2]


ExploringJS ES6 variables

Princiya Marina Sequeira | P's Blog | 2018-01-12 06:50:44

Cover image courtesy – here.

This series of posts is aimed at preparing oneself to face JavaScript technical interviews. Here is the source code.

In this post, I shall talk about let vs var, scoping, and hoisting of JavaScript variables.

let vs var

The main difference between let and var is scope: let is block-scoped, while var is function-scoped.

What is scope?

Scope is a set of rules for looking up variables by their identifier name.

In simpler words, scope determines visibility for variables.

Simple example:


[Screenshot: simple scope example]

In the first case, there is a ReferenceError, because the variable tmp is not declared in the scope of the executing code.
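Since the screenshot doesn’t reproduce here, the following is a sketch of what that example likely looked like (the exact code in the screenshot may differ; tmp comes from the text, the function names are mine):

```javascript
// Case 1: let is block-scoped, so tmp is not visible outside the if block.
function withLet() {
  if (true) {
    let tmp = 123;
  }
  return tmp; // ReferenceError: tmp is not defined
}

// Case 2: var is function-scoped, so tmp is visible anywhere in the function.
function withVar() {
  if (true) {
    var tmp = 123;
  }
  return tmp; // 123
}
```

Calling withVar() returns 123, while calling withLet() throws, because the let declaration only exists inside its block.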

Declaration vs Definition of a variable

var tmp - Here, the variable tmp is only declared. It isn’t defined, i.e. it isn’t assigned any value.

var tmp = 123 - The variable tmp is both declared and defined here.

Variable defined, but not declared

All undeclared variables in JavaScript are global variables.

[Screenshot: an undeclared variable becoming a global]

ReferenceError vs undefined

[Screenshot: ReferenceError vs undefined]

The variable hoist is undeclared, but notice the text of the ReferenceError in the above message - hoist is not defined! Think of opening a PR to one of the browser JS engines to fix this message?!


Hoisting is a JavaScript mechanism where variables and function declarations are moved to the top of their scope before code execution.

The variable hoist is hoisted in the above code snippet and therefore logs undefined. With hoisting, this is how the JavaScript interpreter sees the code:

[Screenshot: how the interpreter sees the hoisted code]
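In code form (my reconstruction of the lost snippet; the variable name hoist comes from the text above):

```javascript
console.log(hoist); // undefined: the declaration is hoisted, the assignment is not
var hoist = 'value';

// After hoisting, the interpreter effectively sees:
//
//   var hoist;          // declaration moved to the top of its scope
//   console.log(hoist); // undefined
//   hoist = 'value';    // assignment stays where it was written
```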

Temporal Dead Zone

A variable declared by let has a temporal dead zone (TDZ). When entering its scope, it cannot be accessed until the execution reaches its declaration.

In simpler words, let variables behave as if they aren’t hoisted!
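A small sketch contrasting the two (the variable names are mine):

```javascript
// var: initialized to undefined at the top of the scope, so an early read works.
console.log(varVariable); // undefined
var varVariable = 1;

// let: any read before the declaration line sits in the temporal dead zone.
let tdzError = null;
try {
  console.log(letVariable); // still inside letVariable's TDZ
} catch (e) {
  tdzError = e;
}
console.log(tdzError.name); // 'ReferenceError'
let letVariable = 2;
```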

let vs var in loops

A var in the head of a for loop creates a single storage space for that variable.

With let, a new storage space is created for each iteration of the loop.

[Screenshot: let vs var in loops]
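The screenshot is gone, but the classic demonstration looks something like this (I use arrays of callbacks instead of the more common setTimeout version, so the results are easy to print):

```javascript
// var: one binding shared by the whole loop, so every
// callback reads the final value of i.
const varCallbacks = [];
for (var i = 0; i < 3; i++) {
  varCallbacks.push(() => i);
}
console.log(varCallbacks.map((f) => f())); // [ 3, 3, 3 ]

// let: a fresh binding is created per iteration, so each
// callback keeps the value it closed over.
const letCallbacks = [];
for (let j = 0; j < 3; j++) {
  letCallbacks.push(() => j);
}
console.log(letCallbacks.map((f) => f())); // [ 0, 1, 2 ]
```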

The next time someone asks you about SCOPE, these are the things to keep in mind!

Source code for all the above code snippets is here.


Princiya Marina Sequeira | P's Blog | 2018-01-11 16:42:08

Happy 2018 everyone! I wish you all a great year ahead. This post is to summarise my hits and misses for 2017. Here is the biannual update, and a lot of other things got done after that.

  • 2017 has been my most productive year so far, personal & professional.
  • I submitted my first Pull Request to Open-Source in January
  • I applied for Outreachy and was one of the selected interns
  • My open-source contributions on Github are well fed and nurtured this year. They look absolutely healthy, and so does my blogging impact!
  • Mozilla Hacks blog post is coming soon
  • I travelled to the US the very first time this year
  • I turned 30!
  • I delivered my very first workshop at a technical conference on WebRTC and the related blog post series has been getting quite some traction
  • I participated in the Mozilla Open Leader program and successfully completed it
  • I launched my website
  • I started tech-cartoons
  • My learning curve shot up. I achieved quite a lot from my reading list, with web performance and testing being the significant ones; they were also the most pending items from my list to be checked off this year

There is still a lot of scope for improvement (the learning never stops). I have had a fair share of misses in my interview attempts at various companies this year. This tells me I need to be more confident at technical interviews. To be confident, I need to be thorough with my basics and fundamentals, and I am working on this.

I continue to write, learn & grow for 2018!


Princiya Marina Sequeira | P's Blog | 2018-01-10 17:42:56

Image by Florian Rathgeber

What is Outreachy?

Outreachy organises three-month paid internships with free and open source software projects for people who are underrepresented in open source and in tech generally.¹ The goal of Outreachy is to “create a positive feedback loop” that supports more women participating in free and open source software.²

Organisations that participate in Outreachy include GNOME, the Linux Kernel, Wikimedia, Debian, Git, and OpenTracing. These organisations have specific projects for the interns to work on and provide remote mentors.³

The programme runs twice a year, with summer and winter cohorts.

I recently joined the December 2017 cohort of Outreachy, and now have the opportunity to be mentored by collaborators on the Jaeger tracer while contributing to the Jaeger Node.js client!

[Image: the Jaeger mascot]

What is Jaeger?

Jaeger is a distributed tracing system, developed by Uber Engineering and released as open source in early 2017. A few months later, Jaeger became the 12th project hosted by the Cloud Native Computing Foundation.

Note: Jaeger (ˈyā-gər) is German for hunter or hunting attendant

What is a distributed tracing system?

A distributed tracing system is used to monitor, profile, and troubleshoot complex, microservice-based architectures. Modern microservice architectures have numerous advantages; however, there can be a loss of visibility into a microservices based system, specifically the interactions that occur between services. Traditional monitoring tools such as metrics and logging are still valuable, but they fail to provide visibility across services. This is where distributed tracing thrives.⁴

Distributed tracing provides observability of what is happening inside the flow of an application. Distributed tracing systems use traces to tell the story of a transaction or workflow as it propagates through a potentially distributed system. Jaeger, as a distributed tracing system, provides a concrete set of tracers and a trace storage backend, for usage on applications and microservices instrumented with OpenTracing.

What is OpenTracing?

Many contemporary distributed tracing systems have application-level instrumentations that are incompatible, meaning that vendor lock-in can occur as systems become tightly coupled to a particular distributed tracing implementation. OpenTracing, as an API standard, provides a series of vendor-neutral and language agnostic APIs for distributed tracing systems. OpenTracing thus makes it easy for developers to switch tracing implementations.

Jaeger has OpenTracing compatible client libraries in Go, Java, Node, Python, and C++. I am contributing to the Jaeger Node client and, as a first step, am rewriting the OpenTracing tutorial for Node.js. That should be my next blog post! :)

How does the first photo fit in?

In early December, I spoke about Outreachy and how to apply at codebar’s annual 24 Pull Requests event. codebar runs regular programming workshops for underrepresented people with the goal of making technology and coding more accessible. At 24 Pull Requests, mentors help students who are new to programming learn how to contribute to open source and make their first PRs. The photo is of all the mentors at the event who are alumni of Founders and Coders, the full-stack JavaScript bootcamp I attended.

Supporting diversity in tech and in open source is important to me. If you have any questions about Outreachy, or would like advice on applying to the programme, I would love to chat with you! :) DM me on Twitter @KaraMarck.

[1] Eligibility requirements for Outreachy can be found at: https://www.outreachy.org/apply/eligibility/

[2] https://en.wikipedia.org/wiki/Outreachy

Information about diversity in open source:

[3] Additional information on the organisations and projects participating in Outreachy winter 2017 can be found at: https://www.outreachy.org/apply/rounds/2017-december-march/

[4] ‘Evolving Distributed Tracing at Uber Engineering’ by Yuri Shkuro: https://eng.uber.com/distributed-tracing/

Kara de la Marck | Stories by Kara de la Marck on Medium | 2018-01-08 15:13:56

Argument mining in text has been a subject of research for some time. However, looking for claims and arguments that support a statement hasn’t been explored, because of the lack of datasets. The authors look to fill this research gap.

Purpose: There has been a lot of research in the past regarding argument mining in text. The authors fill the existing research gap by looking for sentence-level arguments that support a user claim.

Data: No previous data existed that could be used by the authors, so they mined data from idebate.org, a website where a user claim is provided along with a lot of pros and cons. The sentences can be classified into STUDY, FACTUAL, OPINION or REASONING. The Cohen’s kappa scores for all the categories except REASONING were in the range 0.6–0.7. The disagreement between the annotators for REASONING resulted in a kappa score of 0.25; it was eventually resolved, with a final average score of 0.8.

Approach: The authors used basic features like parts of speech, sentiment features, discourse features from the Penn Discourse Treebank, and style features. For the style features, the authors measure word attributes like dominance, arousal, valence, etc. They also used similarity features like TF-IDF and word embeddings, plus some composite features.

Results: Since no previous research in this field existed, the authors use W2V similarity and TF-IDF similarity as baselines. The metric used is Mean Reciprocal Rank. The approach proposed by the authors beats the baselines.

Future work: In my opinion, word2vec could be used as one of the measures for comparing the sentences in the training and testing sets.

My aim initially was to write a blog post like https://blog.acolyer.org/.
This post leaves a lot to be desired, but I’m not motivated to write on a Sunday night. I’ll start early next week! :)

Gargi Sharma | Stories by Gargi Sharma on Medium | 2018-01-07 22:14:09

Here are our new attempts! We are trying to create a logo for the Debian-Mobcom group.

[Image: logo drafts]

And again any suggestions are appreciated.

Kira Obrezkova | My Adventures in Wonderland | 2018-01-07 10:44:22

Hello again! Recently I faced the problem of needing to recover the code that generated a DLL, only by looking at the DLL itself.

The project had many independent contributors and was deployed to a few different environments, which resulted in a few versions built from local branches.

Version numbers alone didn’t solve it, but let’s talk about them first.

Versioning is a common problem in software development, however, there isn’t a consensus about how to properly version a project. There are some guidelines, and a lot of discussion out there, but in the end, the team should choose what works best for them.

I like to use three numbers, major.minor.revision, starting at 1.0.0 it progresses like that:

  • Increase the major version whenever the changes aren’t backwards compatible. Usually these are big changes, and this number shouldn’t be increased often.
  • Increase minor version whenever you add a new feature which is backwards compatible.
  • Increase revision version after minor changes, like bug fixes, organization commits etc.

I find those three numbers enough, along with the trick I’ll show you next, but a fourth number on the right might be useful:

  • Local version: how many local changes (not-pushed commits, for instance) were made.

Back to the initial problem: of its many solutions, I found it best to embed git info into the DLL version, and to make it automatic, so there is no chance of forgetting to do it. The trick was to use hooks.

First I tried git hooks, it didn’t work, but let’s take a look at them anyway :3

Git hooks are shell scripts that are hooked to an action; they are executed whenever the action is triggered, before or after it, depending on the hook. Those scripts must be put in the folder ‘ProjectFolder/.git/hooks’, where you can already see a few sample hooks shipped with git. To activate a hook, just remove the .sample extension and it’s ready.

The idea was to write the git hash into the version file right after a commit and amend the changes; that’s it, only a post-commit hook needed.

Little did I know that there is no real amend: git removes the last commit, adds the new changes and makes a new commit with a different hash, so the hash number written into the version file is immediately meaningless.

Luckily there are also build hooks, the same principle of git hooks applied to the building process, then the solution was to write the git hash in the version file right before building the dll. I used Visual Studio 2013 and C# for this but it should apply to other tools as well. (This one works)

Actually, I preferred to create an additional version file, containing only the git info. It’s possible to overwrite the standard version file, but I didn’t want to unload the project for every version change. Probably this can also be avoided, but I didn’t find an easy way :3

Visual Studio offers a visual interface to define hooks, go to Project Properties -> Build Events and you can see the text boxes for Pre-build and Post-build events. As far as I know those commands will be executed in the Windows PowerShell at the right time. You can also define the hooks directly in the csproj file, which I preferred.

This file is an XML file, where the DefaultTargets attribute of the Project tag registers the hooks, with Build as the main build event. The events are executed in the same order they appear. Take a look:

<Project ToolsVersion="15.0" DefaultTargets="Version;Build;Clean" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

You can see that the Version hook, which creates the version file, is executed right before building the project. After building the Clean hook, which deletes the version file, is executed.

Ok, now we just need to register the hooks. Luckily it’s very easy: just create a Target tag right under the Project tag, with the hook name as an attribute, like this:

<Target Name="Clean">

<Delete Files="Properties\AssemblyGitInfo.cs" ContinueOnError="true" />

</Target>


The only missing piece is the git hash. To add it into the version I used the package MSBuildTasks, which is available via NuGet. We just need to install it and add the following tag in the csproj file:

<Import Project="..\packages\MSBuildTasks.\build\MSBuildTasks.Targets" />

Right under the tag:

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

Beware of the MSBuildTasks version, check which one is installed in the packages folder inside the project folder.

With MSBuildTasks you can use the GitVersion tag, which defines a few environment variables, being one of them the git hash. Since a code snippet is worth a thousand words:

<Target Name="Version">

<GitVersion LocalPath="$(MSBuildProjectDirectory)">

<Output TaskParameter="CommitHash" PropertyName="CommitHash" />

</GitVersion>

<AssemblyInfo CodeLanguage="CS" OutputFile="Properties\AssemblyGitInfo.cs" AssemblyInformationalVersion="git hash - $(CommitHash)" />

</Target>


You can see that the git directory is defined and that GitVersion outputs the parameter CommitHash from its inner property with the same name. Right after, this parameter is used in the AssemblyInformationalVersion as "git hash - $(CommitHash)".

The whole csproj file will look similar to this:

<?xml version="1.0" encoding="utf-8"?>

<Project ToolsVersion="15.0" DefaultTargets="Version;Build;Clean" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

<Import Project="..\packages\MSBuildTasks.\build\MSBuildTasks.Targets" />

<Target Name="Version">

<GitVersion LocalPath="$(MSBuildProjectDirectory)">

<Output TaskParameter="CommitHash" PropertyName="CommitHash" />

</GitVersion>

<AssemblyInfo CodeLanguage="CS" OutputFile="Properties\AssemblyGitInfo.cs" AssemblyInformationalVersion="git hash - $(CommitHash)" />

</Target>

<Target Name="Clean">

<Delete Files="Properties\AssemblyGitInfo.cs" ContinueOnError="true" />

</Target>

</Project>
The result is that after building the project, the output file property “Product Version” contains the git hash.

Hope this can be useful to someone 🙂

Elise Lennion | Elise Lennion | 2018-01-07 01:09:49

In the last article I described the preparation steps for building a package with sbuild. Now I want to describe how to make our package available on mentors.debian.net. This site is intended to help people upload their packages to Debian. Since only a few people have permission to upload directly, everyone else should upload their packages there and look for a “sponsor”. The meaning of this word is described better on the site itself. In this article I want to describe what steps should be taken in order to upload our package to the site.

Preparing a package

Firstly, we should build and sign our package. But before signing it we should generate our GPG key:

gpg --full-generate-key

we can list our keys:

gpg --list-secret-keys --keyid-format LONG

and now we have an ability to see our public key:

gpg --armor --export <our key id>

The printed key we will have to copy to our mentors.debian.net profile in order to allow the site to check our signature on uploaded packages.

Now we can build and sign our package:

dpkg-buildpackage -S -d --sign-key=<our key id>

Note that here we use -S option which means that we want to build a source package.

We can build out package with sbuild:


Now we should check out package with lintian:

cd ..

lintian -I -E --show-overrides --pedantic `ls *.changes`

It is good if there are no warnings in lintian’s output.

Now we can try to upload our package. To achieve this we can use two different tools: dput or dupload. I will use dput-ng - the new generation of the dput tool. First of all, we should create a configuration file. Its format is described here. At this stage we should have an account on mentors.debian.net, with our public key (signature) added to it. The command is quite simple:

dput mentors libosmo-sccp_0.7.0-4_source.changes

After this we should receive an e-mail about the package upload and see the package in our profile under the ‘My packages’ tab. Further, I want to describe some difficulties which I encountered during the upload process.

Problems and issues [in work:add more links]

Though I was able to upload the package I still have some problems.

First of all, it is not clear to me what happens during the dpkg-buildpackage and sbuild phases. I am going to describe it in detail in the next post.

Second of all, there are technical problems and gaps in the documentation. On the one hand, it is not very clear what should be placed in the debian folder; on the other hand, it is hard to understand what the content of those files should be. For example, in the documentation some fields of the rules file are marked as required, but in many packages they are absent and generated automatically.


Kira Obrezkova | My Adventures in Wonderland | 2018-01-06 08:54:55

As my second batch at the Recurse Center is approaching, I can’t help but to be excited for all the pairings to come. I enjoy learning by working with others - other than improving my coding style and understanding how to solve problems better, I always pick up all sorts of little tweaks and ideas while geeking out over someone’s setup.

Since many of my batchlings are going to be new to the community, I wanted to think of some rules for pairing that could improve the experience for all those involved, other than the Recurse Center social rules.

In general, I find there are at least three types of potentially great pairing experiences:

  • helping someone out - when one of the parties has an issue and they’re looking for some guidance from a pairing partner more experienced in the area of interest
  • exploring a concept together - when both parties are looking forward to tackling a new problem together, regardless of their perceived experience
  • rubber-ducking - when one of the parties needs to talk to another programmer, be it on a more general, problem-solving level or particulars of their code that make for a mind-boggling puzzle they’re struggling with, again - regardless of their perceived experience (it is often the “stupid” or “beginner” questions that we forget to ask ourselves when debugging our own code that are just the right questions)

Have a good setup

Don’t assume your setup is going to work for pairing out of the box. Think of possible obstacles and make sure your pairing partner is comfortable, e.g. I promise to increase my 8-point font and adjust the brightness, so you can read everything you need on my tiny screen.

The best approach is to find the lowest common denominator - no Vim/Emacs bindings, no dark magic shortcuts and gear. Using a text editor with at least some GUI (VS Code, Atom), a separate window for the terminal and a browser window for web testing usually works. Sometimes even a simple service like Glitch or CodePen is enough.

Keyboard is lava

During pairing it’s usually best that only one person types; they’re called the driver and their role is to implement your shared ideas in code. The less experienced they are, the better it is for them to be the person writing the code, since they can make mistakes and learn from them through discussing and correcting them together. They won’t learn as much from simply observing.

Even if you are much better at typing, please, don’t take over the keyboard. It happened to me more than once, usually out of the sheer eagerness of my pairing partner to just solve the problem at hand, and I always felt intimidated by it.

Know what’s important

When pairing, try to focus on the problem at hand - you are giving/receiving the most important asset one can give, their time and attention. There is no point in focusing on irrelevant details, e.g. I might write terribly suboptimal Python, but if the purpose of the pairing is first and foremost for me to understand how merge sort works, optimizing too early might not be all that helpful.

  1. When helping - depending on how experienced your pairing partner is, there might be an urge to solve every possible problem together - if you have a ton of time, that’s a wonderful way to spend the afternoon/night! Learning from more experienced developers is always fun, especially when they can also learn a trick or two from their pairing buddies.

    Most of the time the pairing is focused around a particular problem. If you are helping someone out, they probably went through at least a few ideas and have tried various approaches. Before you get started, make sure you know:

    • what is the problem
    • what has already been tried
    • what kind of help are they looking for
  2. When asking for help - before you ask, try. Try to understand where and why you have a problem, so that you can describe it to your pairing partner. Check for all the usual suspects (semicolons, not passing arguments, not calling the function, typos) - you shouldn’t be embarrassed if your problem turns out to be “just” about a typo, but more often than not you probably will feel bad for “wasting” someone’s time (please, don’t - you learn by making mistakes).

    Before you get started:

    • give a brief overview of your code
    • describe the problem and the approaches you already tried
    • talk about your hypotheses as to where the issue might be
    • say what kind of help you are looking for
  3. When pairing on a new problem - be it a “Cracking the Coding Interview” question or a project you are building together, try to define what you are looking to do during the pairing session. Decide together on the setup, stack and other relevant technical details.

Be mindful

Pairing is meant to be a pleasurable experience for all parties involved, but we do make mistakes - you might tell an unfortunate joke or the solution you propose ends up causing a stack overflow. That’s all good, as long as you are mindful and open to correcting any missteps.

Think of pairing as canoeing together (or dancing, if that’s a more appealing metaphor). There is a rhythm to it and a direction you’re working towards, but there is also the pleasure of being in the moment together and enjoying the experience. If you’re going to go in different directions or not find a shared rhythm to it, the canoe might end up spinning in the middle of the river and you will get nowhere.

Finding the right balance might take time, but it’s rewarding. Also, you don’t have to have a great experience with everyone - even when you’re both excited about programming and mindful, you might just not be good pairing partners. And that’s okay.

Alicja Raszkowska | Alicja Raszkowska | 2018-01-03 17:25:00

Below are just some appointments that I had made while working/studying MoinMoin wiki in December.

Just download and install it?

Downloading MoinMoin Wiki comes with a very nice message saying: 'You also need to apply this bugfix patch, sorry.' Which meant nothing to me, because I had never applied a patch before. And, really, that is not very welcoming to new people; it makes me wonder why they couldn't just provide a final, corrected version of the code for installation. But, anyway, I had to learn how to do that.

Applying a patch on GNU/Linux

At first, I had a bit of trouble trying to figure out what 'applying a patch' meant in practical terms (what I had to do to apply them). I had created a patch before, when contributing to some translations, but I had never applied a patch.

I searched online for a bit and... it ended up not being as difficult as it sounds (but it would be so much easier not to have to do it to begin with). First of all, I had to find the 'raw commit' on the link provided and download the file to the moinmoin directory. Not only that, but this file should be saved with a .diff extension, for instance: patch_file.diff.

Then, inside the directory, the patch command should be used:

patch -p1 < patch_file.diff

And done! Patch applied to all four files.
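To see what the -p1 option actually does, here is a self-contained toy run with made-up file names (nothing here is MoinMoin-specific): -p1 strips the first path component (the old/ and new/ prefixes below) from the paths in the diff headers, so the patch applies to the files in the current directory.

```shell
set -e
demo=$(mktemp -d)
cd "$demo"
mkdir old new proj
# Build a tiny unified diff between an old and a new version of a file
printf 'hello\n'       > old/greeting.txt
printf 'hello world\n' > new/greeting.txt
diff -u old/greeting.txt new/greeting.txt > patch_file.diff || true  # diff exits 1 when files differ
# proj/ plays the role of the unpatched source tree
printf 'hello\n' > proj/greeting.txt
cd proj
patch -p1 < ../patch_file.diff   # -p1 strips 'old/' from the header path
cat greeting.txt                 # the file now holds the new version
```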


From my understanding, to use EventCalendar, at least a page inside a category had to be created.

To insert an event, insert the event information in any pages of specified category (CategoryEventCalendar by default). https://moinmo.in/MacroMarket/EventCalendar

That meant I had to play a bit with MoinMoin and get familiar with how to use it. Sure, I had used a wiki before (Wikipedia and, as I mentioned in another post, ikiwiki). I spent a whole day just doing that: learning how to create new categories on this wiki, creating new pages, inserting the new pages in a category, listing them and such. Was CategoryCategory a thing? Should I just write EventCalendar or CategoryEventCalendar? The latter won, because that seems to be the pattern on the MoinMoin website itself.

Still on Macros

A macro is entered as wiki markup and it processes a few parameters to generate an output, which is displayed in the content area.

MoinMoin allows two types of macros:

  • Standard macros - these are macros whose code ships with MoinMoin itself. They are ready to use.

Macros reside in MoinMoin/macro and data/plugin/macro.

The macro object can access the following objects:

macro.request (see MoinMoin/request)


macro.formatter (see MoinMoin/formatter for API)


macro.formatter.page (see MoinMoin/Page.py)


Macros should use macro.formatter to create the output and return it.

That makes sense, now. EmbedContent uses this macro.formatter and EventCalendar uses it as well. This is called when the macro is executed (on EventCalendar, it calls a setglobalvalues function and uses global variables to it).

  • Macros from the macro market - this is the place to find new macros, and to publish macros developed by the community. There is some guidance for developers covering security and documentation, not to mention how to publish there (create a wiki page).

Macros from the macro market have to be downloaded and put in the correct directory of the MoinMoin installation in order to work. I noticed that, for some reason, wget can't be used to download the .py file with the macro code, which sucks when you're using just a terminal on a server.

Renata D'Avila | Renata's blog | 2017-12-31 02:49:00

As a former Outreachy intern, I was blessed with the opportunity to present my work at the Open Source Summit Europe 2017 held last October 23 to 26. At the conference I was able to meet my mentors, other Outreachy interns and other people passionate about the Linux Kernel and Open-source.

First Time Speaker

This was my first technical conference and I attended as a speaker. I’m a newbie at public speaking, so I had to prepare for it months before the conference. I participated in a panel discussion entitled “Outreachy Kernel Internship Report” (slides here), with the entire session lasting about 40 minutes. Since it was a panel discussion, each participant was allotted 5 minutes to present.

This 5-minute talk can be classified as a lightning talk1 and it’s a good opportunity for first-time speakers like me to hone public speaking skills. The short duration allows you to practice many times!

Talk Preparation

I have read a lot of articles on the Internet about giving lightning talks. One of the resources that stood out was this: How To Give a Great Tech Talk2, which covers all formats, not only lightning talks.

I also needed to consider speech that promotes inclusiveness, so I took advantage of the free “Inclusive Speaker Orientation” course.

If you are curious about the articles, here are some:

Networking Opportunities

The conference granted the opportunity to meet some of the mentors who were instrumental in my involvement in the Kernel community: Julia Lawall, Daniel Baluta, Rik van Riel and Greg Kroah-Hartman. Apart from meeting mentors in person, I had the opportunity to be introduced to other people in the conference as well as see in person key people in the community like Linus Torvalds.

Much as I’m grateful for the opportunity, I was not able to make the most of it. Being a first-time attendee meant that I knew only a handful of people; had I attended a different conference, I would not have known anyone in the crowd at all! I’m not particularly shy, but shyness got hold of me at times, especially during the Women in Open Source Lunch sponsored by Adobe.

Women in Open Source Lunch Group Photo3

There, I was able to mingle with other women attendees. It’s a regret that I didn’t introduce myself to more people there; I got tongue-tied and didn’t take much initiative to start even small talk! With this experience, I now understand why people dread networking gatherings.

I vow to do better at the next opportunity.

Favorite Sessions

Here’s the chronological list of my favorite sessions.

Free and Open Source Software Tools for Making Open Source Hardware - Leon Anavi, Konsulko Group

Consider the design of the case simultaneously with the design of the PCB

According to Leon, open source hardware is a viable business model and there are free and open source tools available for designing such hardware.

A fully open source physical product consists of an open source PCB, open source software and an open source case. The session raised awareness of how open hardware is licensed, as well as the available FOSS tools that can be used to design the PCB and the case.

Fast and Precise Retrieval of Forward and Back Porting Information for Linux Device Drivers - Julia Lawall, Inria

Julia, one of my mentors, presented how Prequel can be used to aid in porting of drivers. If you are familiar with Coccinelle, you’ll be at home with this tool.

Prequel is a patch-like query language for commit history search. To ease driver porting, gcc-reduce was used to reduce error messages and prequel was used to search commit histories.

For 86% of these 80 issues, the first commit returned by Prequel gave the needed information

For the full-picture of the process, you may check out the slides4.

I’m glad this tool exists. I would normally use any of these commands to do something similar:

git log --grep='search string'
git log --pretty=oneline --abbrev-commit -S "search_query"

Search the code based on a keyword:

git grep 'keyword' /path/to/be/searched/

See who did the change and why:

git blame -L line1,linex file_path.c

How Not to be a Good Linux Kernel Maintainer - Bartlomiej Zolnierkiewicz, Samsung Electronics Polska Sp. Z o.o.

No starting guide for new maintainers

For budding new maintainers out there, Bartlomiej pointed out several technical and social mistakes that may happen and provided insight on do’s and what NOT to do in those situations.

Also, Bartlomiej notes that Documentation/process/ is developer-oriented, with the exception of Documentation/process/management-style.rst5

Dirty Clouds Done Dirt Cheap - Matthew Treinish, IBM

Any guesses what this means

A fun session: Matthew detailed his journey of building an OpenStack cloud on a $1500 budget from the outlook of someone with no prior knowledge of OpenStack. What made this more interesting was the reliance on release tarballs rather than packages, which resulted in the majority of the issues encountered.

The project enabled enumeration of areas that need to be improved upon, such as logging and error reporting.

If you want to learn more about this subject, you may check out this video from another conference and for further details, this blog post.

Why Should We Care About Kernelnewbies! - Vaishali Thakkar, Oracle

Newbies often look for more ideas but don’t know where to find them

Being a Kernelnewbie myself, I can relate to the difficulties discussed in this session. Vaishali listed the following obstacles in a Kernelnewbies journey:

Setting up an environment

In Outreachy, we have this First Patch tutorial. Having gone through that, I agree with Vaishali that although it helps in setting up an environment for kernel development, it doesn’t explain how the kernel development process6 works.

First successful contribution

Maintainers pick up the patches based on their work flow and these patches get sent through a mailing list. Due to the volume, you may not get a response right away and worse, you’ll have to re-send due to the patch getting lost.

Find tasks for quality contribution

TODO files are often outdated. You may need to post to the mailing list to ask for ideas. Personally, I ran into this problem before but got around it by asking and/or observing discussions and trends.

Quality contribution in the kernel

Sending patches about a relevant task may need discussing with the community first.

Vaishali proposed the project Kernel Bridge to help address these concerns. If you are interested in contributing, you may check out the slides7 and the GitHub8 page.

Overall, that was a fun week!

I would like to thank the Linux Foundation for providing travel assistance.

Eva Rachel A. Retuya | Rachel's Blinking LEDs | 2017-12-29 00:00:00

I had my last exam of BTech on 7th December, from 8 to 11 AM. The next day, on 8th December, from 5 to 7 PM, I had my lab exam of Real Time Systems. Luckily, it went pretty well. There was a farewell dinner organized for us in the college mess-A, but we had other plans. All of us rushed to our rooms, called each other to confirm if the plan was on, and started getting dressed up. I begged Neha and Anupriya to join but they didn’t. Nevertheless, I got dressed up in all black, picked a coat, kept my brush in my wallet and ran to the main gate. We settled on the last seat. We were six: Mukesh, Simran, Somya, Nema, Sharma and me. Since this was a secret escape, we gave our keys to Seren. After a little argument on the bus, the places to visit for the night were decided.

Around 9 PM, we reached the Indian Coffee House for dinner. The waiter literally threw us out, but the receptionist was kind enough to offer us some food, though we were allowed to order only once. So after some good South Indian food was served, the place closed and we were pushed out. We made calls to the hostel we planned to stay at, and thanks to Swati and her group, we were offered a place. With this problem solved, we went to McDonald’s for another round of dinner. I tried their iced tea; it was good. Some Instagram posts were made, and a lot of crap was created. From there we left for the 100% Rock Bar. It was around 10:30 PM when we reached there, and after ordering drinks we jumped on the dance floor. The music was loud AF.


We danced so much; the songs were a blend of Punjabi and some Bollywood. If I remember right, only two English songs were played. We were joined by a foreigner too, who was in shorts and was enjoying himself way too much, joining every group once in a while. We made formations, copied each other’s steps and I screamed at the top of my voice. The bouncer had to come and tell Mukesh that one was not allowed to sit on the dance floor. Near 12:30 AM, when the music was turned off, we headed out, clicked a lot of pictures and booked a cab for the hostel.

So Nema, Sharma and I were in one cab, the others in another, and since we didn’t know the way to the hostel, we were really scared. We turned into a little street that was a bit uphill and narrow. The locality was dead asleep at that time. At the entry of an avenue, the other three were standing. Our car stopped; the look on my friends’ faces was comical. Somya was angry, and as I climbed out I said, ”I am so sorry for suggesting this place. God help us tonight. I hope we stay alive.” I was the one who, after taking advice from Swati, had finalized this place to spend the night at.


So one guy from the hostel escorted us; we turned through who-knows-how-many streets and went deep into the neighborhood. There was dead silence except for the noise of Mukesh’s heels, and I tried to joke with Nema. Everyone kept asking that guy, “How long? How far?”

Hence, we reached the great place – “Chalo Eco Hostel”



A low-cost hostel for backpackers: everything there was handmade. The windows above the doors, instead of having glass, had bottles stuck with white cement, and they gave out colors. The walls along the stairs had colorful flowers painted on them; the tables and the closet doors were made from wooden planks from crates. There was a swing on the second floor, where we were given a room for 6 people (“Mixed veg”, they named it), a dormitory with yellow beds, a lamp attached to each, and blue pillow covers that read “Wild and Free”. The floor had two shared washrooms that were very clean. Below the AC it was written to save electricity because “dharti” needs it. On the door of the washroom it was written: “We wanted to hang a mirror here, but you wouldn’t like to see yourself in this state :P” The terrace was the smoking area, with jute chairs, dumbbells, a sigdi for burning coal, and a bed. The walls were painted in bright colors and colorful dupattas were hung for decoration. I sipped my morning tea at 7 AM standing on a bench, looking at the range of the hills.


All throughout the night, we talked, laughed, giggled and made memories. I dozed off around 4 AM, but the laughter of my friends kept me in a subconscious state.  All in all, it was an unexpected experience.


In the morning we were running late to catch the college bus, and the owner of the hostel took us through the lanes; he was such a helpful person. He took the pains of leading us out of the locality, where no cab would come to pick us up; moreover, he booked an auto-rickshaw for us. We got down at the Pink Square mall and got on the bus from there. The sunlight felt warmer, and I smiled.


We went for breakfast together, and the battle to open my room’s lock started after that. That’s another story, for another day.

Sonali Gupta | It's About Writing | 2017-12-28 13:40:05

Loved your article. I can relate to the part where you say more of the learning comes from outside the classroom.

Gauri P Kholkar | Stories by GeekyTwoShoes on Medium | 2017-12-26 07:39:38

Our Individual Engagement Grant has been renewed! 🙂 Thank you so much to everyone who supported our application – reading all of the wonderful comments has been extremely meaningful and encouraging for us.

We have been hard at work getting version 2.6 of the app out; there was a bit of a rocky start with the first few beta iterations, but we finally have v2.6.5 in production! \o/ Improvements in the current version include:

New UI

  • A brand new login screen
  • New design for the list of nearby places that need pictures
  • New navigation drawer design with username displayed
  • The upload screen has been remodeled to include tooltips for title and description fields, as well as an explicit copyright declaration and a link to Commons policies
  • Improved media details view with links to the selected license, categories and image coordinates

Category search

  • Fixed common issue with category search getting stuck forever
  • If a category has an exact name entered, it will be shown first
  • Added filter for irrelevant categories

Nearby Places that need pictures

  • Fixed issues with GPS that was causing “No nearby places found” errors
  • Improved handling of the refresh button


  • Added product flavors for the beta-cluster Wikimedia servers – this allows developers to test most of the features in our app without having to upload test pictures to the actual Commons server
  • Added RxJava library, migrated to Java 8
  • Switched to using vector icons for map markers
  • Added several unit tests
  • Converted PNGs to WebPs to reduce APK size


  • Modified About page to include WMF disclaimer
  • Modified Privacy Policy link to point to our individual privacy policy
  • Switched to using Wikimedia maps server instead of Mapbox for privacy reasons
  • Removed obsolete EventLogging schemas from app  for privacy reasons

Misc improvements

  • Fixed crash when using the camera
  • Reduced memory leaks and battery usage
  • Multiple fixes for various other crashes/bugs
  • Various improvements to navigation flow and backstack
  • Added option for users to send app logs to developers to help us troubleshoot any problems they are experiencing (has to be manually activated by user)

Facebook page

We have created a new Facebook page that we will be posting updates on!

Stats for 2017

The end of 2017 is nigh and the stats are in! Roughly 24,000 new images were uploaded via our app this year, and only a fraction of them required deletion. 🙂



Coming up next….

We are also currently in the process of implementing a few major new features that we anticipate will be included in our next release:

  • A new and improved Nearby Places UI with hybrid list/map and bottom sheets for individual places
  • Direct uploads from Nearby Places with relevant title, description and categories suggested
  • Two-factor authentication login

Happy holidays, and stay tuned! 🙂

Josephine Lim | cookies & code | 2017-12-23 15:24:31

This story tracks the beginning of my journey with Mozilla, where I worked on making error messages for lifetime conflicts user-friendly and simple but precise for the Rust Programming Language.

The first time I showed interest in making a contribution to Rust, my mentor Niko Matsakis scheduled a video chat with me and explained the project, which was titled “Making an unsafe code linting tool for Rust”. He then presented a few code snippets to help me understand where the problem lay and drew similarities with C++, since I was new to Rust at the time. As nervous as I was, I loved the idea of developing the unsafe linting tool. We then went on to discuss how I should proceed with my first contribution, and we zeroed in on Rayon, a data-parallelism library for Rust.

My first contribution was this

add test that drives the collect-consumer, producing too less values by gaurikholkar · Pull Request #264 · nikomatsakis/rayon

The reason we went ahead with Rayon was that the linting tool would be tested on Rayon, as it contains a large amount of unsafe code. I wrote unit tests for the collect consumer, closing an existing issue:

Negative tests abusing the consumer protocol · Issue #53 · nikomatsakis/rayon

and then made my second contribution too:

Adding tests for failing to call complete, produce fewer,more items incase of splitting. by gaurikholkar · Pull Request #269 · nikomatsakis/rayon

The happiness you feel when you make your very first PR is 💓

This helped me understand how the collect consumer worked and I got to see examples of unsafe code.

Going ahead, my mentor searched for beginner-level issues, keeping in mind that the more I got used to Rust, the better, and decided that I should fix issues in the Rust compiler itself. My first reaction must have been 😯, as I was quite new to open source and my first impression was that contributing to the compiler itself meant something difficult.

I remember discussing with a friend that Rust is my first open source Contribution and she was like you’re very lucky ☺

I’ve heard of contribution stories going wrong. Not that it ever mattered but I’ve felt so loved by the community that I’m really grateful for the experience.

On the Rust compiler, I was to work with the region-inferencing part, and the first issue was to fix an existing error message.

issue on let suggestions

and the PR link for the same.

Disable ref hint for pattern in let and adding ui tests #40402 by gaurikholkar · Pull Request #41564 · rust-lang/rust

I then worked on the same issue, introducing another change in the error message

Consider changing to & for let bindings #40402 by gaurikholkar · Pull Request #41640 · rust-lang/rust

Now came the time when I had to build the compiler locally, running ./x.py build --incremental and ./x.py test src/test/ui constantly, often having my share of struggles with the borrow checker, but it was nothing short of a learning experience. I followed the mentoring instructions written by my mentor and wrote my first few lines of proper Rust code. Over time, I’ve grown fond of the build commands :p.

I remember talking to Niko Matsakis, and he suggested picking up the harder of the two issues I was planning to work on; that decision helped me throughout the internship. It familiarized me with building the compiler, running the test suite and running the compiler locally on different examples to see how the code change looked from the user’s perspective. So when he later proposed the idea of working on lifetime errors instead of the linting tool, I took an instant liking to it. The major reason, he explained, was that this would have more impact in less time. The second important reason was that I would get to explore a part of the compiler that very few people apart from the team are familiar with, i.e. the region-inferencing part. Then, based on feedback, we would modify the errors further (sounds fun?). The linter would have been somewhat along the lines of Clippy; the lifetime-error work would be something different.

As easy as it may sound, designing the message itself took effort. You have to be simple yet precise, and that’s difficult. You probably want to explain everything in paragraphs so that the user experience is better, but you need to be careful too, because if you give users too much information they understand nothing of it. The challenge was to think as a user. A hundred-plus lines of code, and all they did was change the error message for a few cases. This is how the project title changed.

I started off a bit early; the excitement was too difficult to contain :p and there was plenty of free time. We looked at a few common examples and tried to understand how lifetimes work. Then he walked me through a bit of the error-reporting code, pointing out the functions being used, the types of parameters being passed and other important declarations. We picked the simplest example, discussed its features in depth and then figured out all the cases it covered.

What I was to deal with were anonymous lifetimes, i.e. missing lifetime parameters. Since lifetimes, or regions, are among the few things that confuse Rust users, we safely assumed that the error came from the missing lifetime declarations in the function or trait declaration, rather than from a fault in the code itself.

That’s how we started: a new error message, E0621, for functions with one named parameter and one anonymous parameter. There is a systematic procedure you follow for this. Once you fix the error message, you need to run the test suite. You also need to add an explanation in the diagnostics.rs file, which is like registering the error code. You also need to add new ui tests (which show the output) and compile-fail tests. It might sound simple, but it wasn’t.

I had to understand the way the Rust compiler defines an error, to be specific, RegionResolution errors. I had to understand the report_region_errors() method. I limited myself to working with only ConcreteFailure errors throughout. My first blog post speaks about it. This post speaks about how we planned to extend it to traits and impls, keeping it on hold for anonymous regions in self and return types. The most challenging part here was to replace the anonymous region with the named region here:

consider changing the type of `y` to `&’a i32`

I did it using TypeFolders.

Here is the merged PR


Talking of merges, the happiness when your code gets merged into the master branch is amazing 💃

Moving on from there, I went a few steps further, trying to handle conflicts where both regions are anonymous, or what we call the anon-anon conflict. It was more difficult here: there were too many cases, and the error design itself kept changing due to the feedback we got.

Here is the link to the PR for handling it in references.


The error message was later changed as follows


Later, I extended it to Structs.


The code needed a lot of testing, as a few cases were breaking. The code reviews emphasised the need to write cleaner code.

Currently, I have ongoing PRs to extend E0623 to named conflicts (blog on this later) and to cover return types and self.

I’ve planned so much about how to write this post, because there is too much to write. I doubt I will do justice to it, but here is my effort. Working with Rust is like the introductory hello-world program for me: it’s my first exploration of open source and community contribution. It’s immense growth and love. It’s learning, and the satisfaction of giving back to a community that’s given you love.

I’ve had the opportunity to work with a language compiler and to learn the language, understanding it better via contributions. Usually that works the other way around: you learn the language before you go digging into the compiler. I’ve had the chance to make error messages better for Rust users, for something I initially struggled with myself, i.e. the lifetime errors. I’ve got to be both the consumer and the producer. My initial hesitation about learning a new language is now replaced with a curiosity to learn more about Rust every single day.

A huge thank you to Outreachy for organizing this amazing programme and to Sarah Sharp and all the other Outreachy admins. A big thank you to Mozilla, Elizabeth Noonan and Larissa Shapiro for making this possible ☺. Thank you Niko Matsakis for being the most amazing mentor anyone could have ever asked for, right from being available to answer my silliest doubts on let and if let, to helping out with understanding, writing and reviewing code, and having an example ready on almost every topic I had doubts with. These three months have been fun working with you, and I’m sure I’m going to continue to be in touch ☺. I’ve been fortunate enough to interact with a few of the Rust core team members. A huge thank you to Ariel Ben-Yehuda (Rust compiler team) and Esteban Kuber for reviewing my code and being ever ready to help, Eduard-Mihai Burtescu (Rust compiler team) for being a regular at answering my queries on IRC, and Guillaume Gomez and Jonathan Turner (dev tools team) for their reviews and comments. Last but not least, thank you Gitter 💓. For reasons unexplainable, I’ve preferred Gitter over Slack and IRC.

What have I learned?

Deep dive, no first impressions.

However difficult something looks, you can’t really gauge the level of difficulty until you get your hands dirty with the code and experiment a bit. Logs helped me do that: to understand where I was going wrong and what the code was doing. So, take the leap of faith.


There are days when productivity seems a faraway dream. I’ve felt sad looking at a build fail when I knew it was supposed to work fine. I’ve spent my initial days staring at a GitHub pull request, searching for the green tick that often eluded me at the start (I’ve stopped doing that now ;)) and felt gloomy. Take a short break. A walk, maybe. Come back afresh to realize it’s something silly, and get it working. The joy that came with it was much more than if I had got it right the first time. I’ve learnt to be patient with my code.


Even while I was coding, I kept learning Rust on the side, and I’ve been fascinated by everything the language has. Compilers was a course in my third year, and one of my favorites; the assignment needed us to build a mini compiler for a small subset of C. It interested me then, but never did I think that I would be contributing to a language compiler used by a huge community.


It’s been a huge learning opportunity for me. I have grown as a programmer and developer. I’ve become more confident and more efficient. Most importantly, I’ve found happiness in what I do, and the immense satisfaction of contributing to a growing community.

Small is big

The smallest bug fix has got me back to pondering over programming fundamentals and deeper into advanced concepts.


I’ve become pretty fluent in communicating with the other contributors, asking questions when blocked on something, opening up relevant issues on GitHub and trying to be precise with my doubts.

Be better.

A better programmer. A better computer science student. A better learner. A better developer. A better person. Fortunately, working remotely wasn’t much of a problem for me.

Communicate, Ask.

When I asked my mentor if there was a faster way to build stage 1 of the compiler, he suggested using ./x.py build --stage 1 src/libsyntax; earlier, I had been redundantly rebuilding stage 0 with plain ./x.py build --incremental. Often I’ve asked for help understanding certain concepts, and he has given me helpful resources and pointers. This has made me more efficient and saved a lot of my time.

Spread the spirit

I’ve been lucky to get a chance to explain my work to a fellow contributor eager to work on one of the issues I opened up and I’ve been blogging regularly to spread the spirit ☺

First times

I’ve opened up my first pull request, my first issue, my first community code review and my first technical blog written during this period. There’s always a first time.

Moral : Rust is love

Thank you for being the awesome community that you are @rustlang.

This has been one amazing journey, a journey that’s not going to end so soon ☺

Thanks for being a part of my journey. For any queries and suggestions, leave a comment below.

Gauri P Kholkar | Stories by GeekyTwoShoes on Medium | 2017-12-23 06:51:58

When everything else fails

One of the best-known types of testing is unit testing, which aims to guarantee that every module works individually. When an interaction is needed outside a module’s scope, that interaction is stubbed or mocked. But what happens when a new dependency is included, a new system is run or a new feature is written?


A regression, popularly known as a bug, is the name given to any non-expected software behavior. When such behavior is found — sometimes outside the testing scope — and fixed, new tests need to be written to ensure the behavior will not be seen again. Those tests are called regression tests.

In large and public projects, regression tests are needed to ensure the intended behavior is kept between libraries, operating systems and machines. Moreover, this tends to be a deal breaker for global software usage. In a Libre Software project, your software being usable across many OS distributions is likely to be your main goal.

It is important to notice that regression tests can broadly be written as unit tests: unit testing against both desired and non-desired software behavior tends to reduce the need for future regression testing.

A Simple Start

In the ideal scenario, regression testing will work as described. Let us illustrate with a code snippet.

import Couch

class Cat:
    def scratch(self, couch):
        self.happiness += couch.damage(self.claws)

In the above example, there are a Cat and a Couch class, each managed by different developers. Cat uses Couch, but Couch doesn’t need (or want) Cat. Moreover, Couch documentation states that damage() should return a value from 0 to 10 corresponding to the hole-creating ability of the given sharp object; in our case, Cat.claws.

In this scenario, it’s easy to see that a change of behavior in Couch may cause Cat to crash. Maybe unit testing will help us?

class TestCat:
    def test_scratch(self, cat, couch):
        happiness_before = cat.happiness
        cat.scratch(couch)
        happiness_after = cat.happiness

        assert happiness_before <= happiness_after

Good enough: this test will run and pass. Unfortunately, as predictable as it can be, Couch developers wanted a way to optimize its damage-taking ability and avoid possible holes. In their newest release, they updated the damage() function to also account for different and unusual kinds of couch materials, such as glass and bees. As a consequence, damage() may now return values from -10 to 10.

With failing tests, we’re compelled to fix our code. By adding a few lines, it is possible to avoid this kind of error.

import Couch

class Cat:
    def scratch(self, couch):
        damage = couch.damage(self.claws)

        if damage > 0:
            self.happiness += damage

However, there’s another major problem: we haven’t predicted this behavior in our unit test case. Now, we need to update our tests to foresee and avoid this output.

class TestCat:
    def test_scratch_when_damage_is_bigger_than_zero_happiness_increases(self, cat, couch):
        happiness_before = cat.happiness
        cat.scratch(couch)
        happiness_after = cat.happiness

        assert happiness_before <= happiness_after

    def test_scratch_when_damage_is_lower_than_zero_happiness_doesnt_change(self, cat, couch):
        happiness_before = cat.happiness
        cat.scratch(couch)
        happiness_after = cat.happiness

        assert happiness_before == happiness_after
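The tests above rely on cat and couch fixtures that the post doesn’t show. A minimal sketch of how they might be backed — FakeCouch is a purely illustrative stub, not part of any real project:

```python
class Cat:
    """Minimal Cat matching the snippet above (illustrative)."""
    def __init__(self):
        self.happiness = 0
        self.claws = 10

    def scratch(self, couch):
        # Only positive damage (an actual hole) makes the cat happier.
        damage = couch.damage(self.claws)
        if damage > 0:
            self.happiness += damage


class FakeCouch:
    """Illustrative stub standing in for the real Couch dependency."""
    def __init__(self, damage_value):
        self._damage = damage_value

    def damage(self, claws):
        # Mimics Couch's documented contract: a hole-creating score,
        # which after the update may range from -10 to 10.
        return self._damage
```

With pytest, cat and couch would simply be fixtures returning a fresh Cat() and a FakeCouch configured with whatever damage value each test case needs.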

In larger projects, this kind of error may not be as elementary to predict. Quite the contrary: as complexity grows, errors may become impossible to predict in a unit test suite. Moreover, when dealing with many dependencies, writing regression tests gets a lot trickier.

Debug, Test, Rinse and Repeat

During the past week, I’ve come across some interesting bugs in diffoscope which illustrate this concept. One of them is an optional dependency version issue: [#877728 — binutils 2.29.0 on x86–64].

In short, this bug is seen when running diffoscope with binutils version 2.29.0. It’s rather common to see a change in software behavior from one major version to the next (from 2 to 3, say), but not that common in minor releases. In this case, let’s analyse binutils’ previous behavior versus the current one to see which one is wrong, so we can fix it.

According to the bug report, this regression was found while running tests/comparators/test_elf.py, which means the culprit is readelf. readelf is distributed along with binutils and matches its version, so we will probably find more clues running readelf < 2.29.0 versus == 2.29.0.

After a few hours of investigation, it came to light that readelf used to return exit code 0 when no information was found, and started returning code 1, which conventionally signals an error. Going a little further, it was found that this behavior was immediately fixed in version 2.29.1, fixing diffoscope’s behavior altogether.

In this case, when running binutils == 2.29.0 we found a regression. Since diffoscope depends on readelf’s behavior for comparisons, the best we can do is accommodate our software to require binutils != 2.29.0. After making this change and rerunning our test suite, we ensure this regression will not be seen again.
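That version requirement can also be enforced defensively at runtime. Here is a rough sketch — the helper name and the version-parsing approach are mine, not diffoscope’s actual code:

```python
import re

def readelf_is_supported(version_line):
    # `version_line` is the first line printed by `readelf --version`,
    # e.g. "GNU readelf (GNU Binutils) 2.29.1".
    match = re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", version_line)
    if not match:
        return False  # unparseable version: refuse rather than guess
    major, minor, patch = (int(g) if g else 0 for g in match.groups())
    # binutils 2.29.0 changed readelf's exit code and broke comparisons,
    # so that exact release is excluded.
    return (major, minor, patch) != (2, 29, 0)
```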


As seen here, testing is not perfect, and we must accept that. The actual goal in keeping tests running is to guarantee working compatibility between software versions and platforms, not to have tests for their own sake. We must work towards this goal while not intentionally breaking userspace.

This post is part of a series about diffoscope development for the Outreachy Project. Diffoscope is a comparison tool, part of the Reproducible Builds effort. In the following weeks, I’ll be writing about development details, techniques and general discussion around diffoscope.

Juliana Oliveira R | Stories by Juliana Oliveira on Medium | 2017-12-21 20:34:54

Let's get to the project I actually applied to:

To build a calendar for FOSS events

We have a page on Debian wiki where we centralize the information needed to make that a reality, you can find it here: SocialEventAndConferenceCalendars

So, in fact, the first thing I did on my internship was:

  • Search for more sources for FOSS events that hadn't been mentioned in that page yet

  • Update said page with these sources

  • Add some attributes for events that I believe could be useful for people wanting to attend them, such as:

    • Is the registration (and not just the CFP) still open?
    • Does the event have a code of conduct?
    • What about accessibility?

I understand that some of this information might not be readily available for most events, but maybe the mere act of mentioning it in our aggregation system will be enough to make an organizer think about it, if they aim to have their event mentioned "by us"?

Both my mentor, Daniel, and I have been looking around to find projects that have worked on a goal similar to this one, to study them and see what can be learned from what has been done already and what can be reused from it. They are mentioned on the wiki page as well. If you know any others, feel free to add there or to let us know!

Among the proposed deliverables for this project:

  • making a plugin for other community web sites to maintain calendars within their existing web site (plugin for Discourse forums, MoinMoin, Drupal, MediaWiki, WordPress, etc) and export it as iCalendar data
  • developing tools for parsing iCalendar feeds and storing the data into a large central database
  • developing tools for searching the database to help somebody find relevant events or see a list of deadlines for bursary applications

My dear mentor Daniel Pocock suggested that I consider working on a plugin for MoinMoin Wiki, because Debian and FSFE use MoinMoin for their wikis. I have to admit that I thought that was an awesome idea as soon as I read it, but I was a bit afraid of the steep learning curve in understanding how MoinMoin worked and how I could contribute to it. I'm glad Daniel calmed my fears and reminded me that the mentors are on my side and glad to help!

So, what else have I been doing?

So far? I would say studying! Studying documentation for MoinMoin, studying code that has already been written by others, studying how to plan and to implement this project.

And what have I learned so far?

What is MoinMoin Wiki?

MoinMoin logo, sort of a white "M" inside a circle with light blue background. The corners of the M are rounded and seem connected like nodes

MoinMoin is a wiki engine written in... Python (YAY! \o/). Let's say that I have... interacted with development on a wiki-like system back when I created my first (and now defunct) post-Facebook blog.

Ikiwiki logo, the first 'iki' is black and mirrors the second one, with a red 'W' in the middle

Ikiwiki was written in Perl, a language I know close to nothing about, which limited a lot how I could interact with it. I am glad that I will be able to work with a language I am much more familiar with. (And, in Prof. Masanori's words: "Python is a cool language.")

I also learned that MoinMoin's storage mechanism is based on flat files and folders rather than a database (I swear that, despite my defense of flat file systems, this is a coincidence. I mean, if you believe in coincidences). I also found out that development uses Mercurial for version control. I look forward to exploring it, because so far I have only used git.

Over the past few days, I set up a local MoinMoin instance. Even though there is a HowTo guide to get MoinMoin Wiki working on Debian, I had a little trouble setting it up with it, mostly because the guide is sort of confusing about permissions, I think? I mean, it says to create a new user with no login, but then it gives commands that can only be executed by root or sudo. That doesn't seem very wise. So I went on and found a Docker image for MoinMoin wiki and was able to work on MoinMoin with it. This image is based on Debian Jessie, so maybe that is something I might work to improve in the future.

Only after I got everything working with Docker did I find this page with instructions for Linux, which is what I should've tried in the first place, because I didn't really need a fully configured server with nginx and uwsgi, only a local instance to play with. It happens.

I studied the development guide for MoinMoin and I have also worked to understand the development process (and what Macros, Plugins and such are in this context), so I could figure out where and how to develop!


A macro is entered as wiki markup and it processes a few parameters to generate an output, which is displayed in the content area.

Searching for macros, I found out there is a Calendar macro and, besides that, an EventCalendar macro that was developed years ago. I expect to use the next few days to study the EventCalendar code more thoroughly, but my first impression is that this is code that can be reused and improved for the FOSS calendar.
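To make the macro concept concrete, here is roughly what a MoinMoin (1.9-style) macro plugin looks like. The macro name, the hard-coded event list and the exact formatter calls are my own illustration and should be checked against the MoinMoin development docs:

```python
# Sketch of a macro plugin: it would live in the wiki's
# data/plugin/macro/ directory and be invoked from a page as
# <<EventList>> (details assumed, not verified).

def macro_EventList(macro):
    # A real FOSS-calendar macro would pull events from storage;
    # this sketch renders a fixed list through the wiki's formatter
    # so the output matches the page's theme.
    events = ["FOSDEM 2018", "DebConf18"]
    f = macro.formatter
    parts = [f.bullet_list(True)]
    for name in events:
        parts.append(f.listitem(True) + f.text(name) + f.listitem(False))
    parts.append(f.bullet_list(False))
    return "".join(parts)
```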


A parser is entered as wiki markup and it processes a few parameters and a multiline block of text data to generate an output, which is displayed in the content area.


An action is mostly called using the menu (or a macro) and generates a complete HTML page on its own.

So maybe I will have to work a bit on this afterwards, to interact with the macro and customize the information to be displayed? I am not sure, I will have to look more into this.

I guess that is all I have to report for now. See you in two weeks (or less)!

Renata D'Avila | Renata's blog | 2017-12-20 04:20:00


Now that you already know a bit about me, let me start talking about my internship with Outreachy.

One of the steps to apply to the internship is to pick the project you would like to work on. I chose the one with Debian to build a calendar database of social events and conferences.

It is also part of the application process to make a contribution to the project. At first, it wasn't clear to me what that contribution would be (I hadn't found that URL yet), so I went to the #debian-outreach IRC channel and... well, asked, of course. That is when I found the page with a description of the task. I was supposed to learn about the iCalendar format (I didn't even know what it was back then!) and work on an issue in the github-icalendar project: to use repository labels in one of the suggested ways.

My contribution for github-icalendar

Github-icalendar works by accessing the open issues in all repositories that the user has access to and transforming them into an iCalendar feed of VTODO items.

I chose to solve the labels issue using them to filter the list of issues that should appear in a feed. I imagined two use cases for it:

  1. A user wants to get issues from all their repositories that contain a given label (getting all 'bug' issues, for instance)

  2. A user wants to get issues from only a specific repository that contain a given label.

Therefore, the label system should support both of these uses.
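In code, that dual filter might look something like the sketch below; the function and the issue-dict shape are simplified stand-ins of my own, not the actual github-icalendar implementation:

```python
def filter_issues(issues, label=None, repository=None):
    # `issues` loosely mimics the GitHub Issues API: each item carries
    # the repository's full name and a list of label names.
    selected = []
    for issue in issues:
        if repository is not None and issue["repository"] != repository:
            continue  # use case 2: restrict to a single repository
        if label is not None and label not in issue["labels"]:
            continue  # use case 1: keep only issues with the label
        selected.append(issue)
    return selected
```

Either filter can be used alone (all repositories, one label) or both together (one repository, one label), covering both use cases.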

Working on this contribution taught me not only about the iCalendar format, but also gave me hands-on experience interacting with the GitHub Issues API.

Back in October, I was able to attend Python Brasil, the national conference about Python, during which I stayed in accommodation with other PyLadies and allies. I used this opportunity to share what I had developed so far and to get some feedback. That's how I learned about pudb and how to use it to debug my code (and find out where I was getting the GitHub Issues API wrong). Because I found it so useful, in my pull request I proposed adding it to the project to help with future development. I also started adding some tests and wrote some specifications as suggestions for anyone who keeps working on it.

I would like to take this opportunity to thank the friends who pointed me in the right direction during the application process and made this internship a reality for me, in particular Elias Dorneles.

Renata D'Avila | Renata's blog | 2017-12-20 02:01:00


I’m Shaily, an Outreachy Round 15 intern at Fedora.

Outreachy is a three month internship program run by Software Freedom Conservancy — a charity that helps promote Free and Open Source Software projects.
The program is aimed at people from groups traditionally underrepresented in tech. Its goal is to “create a positive feedback loop” that supports more people to participate in free and open source software.

Internships are offered twice a year, and I applied for the winter round running from December to March. During this time, I will be working on adding fulltext search to Fedora Hubs — the upcoming communication and collaboration centre for Fedora contributors.

Getting acquainted with Hubs was a breeze, thanks to the effort the team has put into ensuring everything is well documented and that there's an easy path for any willing contributor to understand the architecture and get started quickly.

This is something Hubs, once live, will make possible for many more projects, by bringing most or all of the relevant information about them together in one place: the associated project "hub".

In the following blog posts, I’ll go over the architecture of the application in more detail and also describe my work as I proceed through this very exciting internship. 😃

Shaily | Stories by Shaily on Medium | 2017-12-19 22:18:30

My friend and I have designed some logos for the Osmocom and Debian Mobcom projects, related to my Outreachy project.

(Logo drafts: mobcom 01, mobcom 02, osmocom)

This is a work in progress, so any criticism is welcome 🙂

Kira Obrezkova | My Adventures in Wonderland | 2017-12-19 20:05:57

I recently applied for a job somewhere, and found the initial application process confusing and dismaying.

The reason, I think, is that it was not clear a) whether the entire process had actually happened, and b) what I was actually submitting. So, I decided to take a bit of time to redesign things to be a little less confusing. I've also blurred out the company name for politeness' sake.

What did it look like at first?

When you look at a job description, you get something like this (with a bright orange ‘apply now’ button that is not visible in this screenshot). This seems fine.

After you click Apply Now, you get an odd sort of thing about your personal data collection. I’m guessing this is because it’s a security company, but it reads all sorts of weird. Whatever, that’s not a huge deal.

Next, you get your first page of the application. I like that they remind you what you’re applying for!

If you upload your resume, your name and email are auto-filled. That’s cool, thanks! When you select ‘Next’, you get this:

Wait. What? We just jumped to questions about my nationality and my affirmative action status? What about my work experience? My education? A cover letter? Did the resume upload skip the need for work and education info? Maybe, let’s keep going.

You might notice (I didn’t at the time) that this button says ‘Submit’, not ‘Next’. I didn’t grab a screenshot (and didn’t want to apply twice), but that’s the end of the application process. It thanks you, and it sends you email confirming your application.

What? I don’t even know for sure what it sent! I don’t know how well it parsed my resume. I have no clue at this point what just happened.

What would I fix?

Ok, so that was all sorts of confusing. Enough so that last night, as I was falling asleep, I was distracted by wondering what would help. I considered a progress indicator, as that would at least make the extreme brevity of the application less of a surprise. I also wondered whether they'd labeled the final button 'Submit', which they actually had (though perhaps 'Submit Application' would have been a clearer signal!). Finally, right before I fell asleep, I realized that what I most missed was a summary of what I was about to submit.

So, my version of the first page, with a progress bar added (using their font as detected by What Font and the same color as the next button for the progress indication):

Look! It’s the first step of three!

My version of the second page (which was the last in the previous version) also has a progress bar, and changed the button to say ‘Next’. Not sure why I couldn’t make the carets a little more visible when they are between things. And perhaps I need some sort of ‘completed’ indicator for the first step, like a checkmark.

Still a weird jump, but at least I had a chance to expect it.

Finally, I made the barest of bones summary page (the progress bar, what one was applying for, and a brief statement about the summary). I didn't make the whole page, which means I didn't get to include a "Submit Application" button instead of just "Submit", or suggest ways to make it easy for people to change things they don't agree with. The latter seems important, especially if it really is automatically interpreting the resume; perhaps offer inline editing?

Not entirely sure how to end progress bars of this type, but you get the point.


I’m struggling with the visual design part of things, but at least I feel a little better about the weird application process, having “fixed” it (at least in theory).

I’m not sure what happens if you don’t submit a resume in that first page (or if you use linkedin or something instead). It seems like it might be a kindness for them to tell you what submitting your resume (or associating with social media) did for you, so that it’s less confusing when it never asks about jobs or education.

Also, Gravit Designer is a pretty nice tool for this purpose!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-12-17 02:14:03

What is Outreachy?

Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns are paid a stipend of $5,500 and have a $500 travel stipend available to them. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and...

Alisha Aneja | Alisha Aneja | 2017-12-17 00:00:00

KubeCon + CloudNativeCon North America took place in Austin, Texas from 6th to 8th December. Before that, I stumbled upon this great opportunity from the Linux Foundation which made it possible for me to attend and expand my knowledge about cloud computing, containers and all things cloud native!


I would like to thank the diversity committee members – @michellenoorali ,  @Kris__Nova, @jessfraz , @evonbuelow and everyone (+Wendy West!!) behind this for making it possible for me and others by going extra miles to achieve the greatest initiative for diversity inclusion. It gave me an opportunity to learn from experts and experience the power of Kubernetes.

After travelling 23+ hours in flight, I was able to attend the pre-conference sessions on 5th December. The day concluded with the amazing EmpowerHer evening event, where I met an amazing bunch of people! We had some great discussions and food. Thanks!

With diversity scholarship recipients at EmpowerHer event (credits – Radhika Nair)

On 6th December, I was super excited to attend Day 1 of the conference. When I reached the venue, the Austin Convention Center, there was a huge hall with *4100* people talking about all things cloud native!

It started with an informational keynote by Dan Kohn, the Executive Director of the Cloud Native Computing Foundation. He pointed out how CNCF has grown over the year: from 4 projects in 2016 to 14 projects in 2017, and from 1,400 attendees in March 2017 to 4,100 attendees in December 2017. It was really thrilling to learn about the growth and power of Kubernetes, which really inspired me to contribute to the project.

Dan Kohn Keynote talk at KubeCon+CloudNativeCon

It was hard to choose which session to attend because there was just so much going on!! I mostly attended sessions at beginner & intermediate level, and missed out on the ones requiring technical expertise I don't possess, yet! Curious to know what other tech companies are working on, I made sure I visited all the sponsor booths to learn what technology they are building. Apart from that, they had cool goodies and stickers: the place where people are labelled as sticker-person or non-sticker-person! 😀

There was a diversity luncheon on 7th December, where I had really interesting conversations with people about their challenges and stories related to technology. I made some great friends at the table. Thank you for voting my story as the best story of getting into open source, and thank you Samsung for sponsoring this event.

KubeCon + CloudNativeCon was a very informative and huge event put up by the Cloud Native Computing Foundation. It was interesting to learn how cloud native technologies have expanded along with the growth of the community! Thank you, Linux Foundation, for this experience! 🙂

Keeping Cloud Native Weird!
Open bar all-attendee party! (Where I experienced my first snowfall.)


Goodbye Austin!

Urvika Gola | Urvika Gola | 2017-12-15 07:50:49

A to Z on writing a lint against single-use lifetime names in Rust.

We start with a simple example.

fn deref<'x>(v: &'x u32) -> u32 {
    *v
}

fn main() { }

Step 1.

Consider the lifetime 'x. It is used only once throughout. What we are trying to achieve is a lint that warns that 'x is used only once, so you can do away with the lifetime binding 'x and proceed as follows.

fn deref(v: &u32) -> u32 {
    *v
}

fn main() { }

Current Progress

warning: lifetime name `'x` only used once
  --> $DIR/single_use_lifetimes.rs:12:10
   |
12 | fn deref<'x>(v: &'x u32) -> u32 {
   |          ^^
   |
note: lint level defined here
  --> $DIR/single_use_lifetimes.rs:10:9
   |
10 | #![warn(single_use_lifetime)]
   |         ^^^^^^^^^^^^^^^^^^^

Step 2.

Let's look at a slightly more complex example involving structs, like the one below.

struct Foo<'a, 'b> {
    f: &'a u32,
    g: &'b u32,
}

fn foo<'x, 'y>(foo: Foo<'x, 'y>) -> &'x u32 {
    foo.f
}

This will produce a lint against 'y and suggest replacing 'y with an _.

fn foo<'x, 'y>(foo: Foo<'x, 'y>) -> &'x u32 {
^^ lifetime name `'y` only used once. Use `_` instead.


Firstly, using _ instead of lifetime names makes it easier to deal with compulsory lifetime declarations in the code. It also means you don't need to worry about repeating lifetime names throughout.

Secondly, this change is a part of the RFC on In-band lifetime bindings. The idea is to eliminate the need for separately binding lifetime parameters in fn definitions and impl headers. Let’s take a look at the example below.

fn outer_lifetime<'outer>(arg: &'outer &Foo) -> &'outer Bar

If 'outer is the only lifetime in use here, you might as well do this.

fn outer_lifetime(arg: &'outer &Foo) -> &'outer Bar

Quoting Niko Matsakis from the PR description on the GitHub repo:

An explicit name like 'a should only be used (at least in a function or impl) to link together two things. Otherwise, you should just use '_ to indicate that the lifetime is not linked to anything.

The next article will explain the code in detail. Till then, adiós!

Gauri P Kholkar | Stories by GeekyTwoShoes on Medium | 2017-12-11 17:37:40

Some Constraints and Trade-offs In The Design of Network Communications: A Summary

This article distills the content presented in the paper “Some Constraints and Trade-offs In The Design of Network Communications” published in 1975 by E. A. Akkoyunlu et al.

The paper focuses on the inclusion of Interprocess Communication (IPC) primitives and the consequences of doing so. It explores, in particular, the time-out and insertion-property features, described in detail below, with respect to distributed systems of sequential processes without system buffering and interrupts.

It also touches upon the two generals problem, which states that it’s impossible for two processes to agree on a decision over an unreliable network.


The design of an Interprocess Communication Mechanism (IPCM) can be described by stating the behavior of the system and the required services. The features to be included in the IPCM are critical, as they might be interdependent; hence the design process should begin with a detailed specification, which involves a thorough understanding of the consequences of each decision.

The major aim of the paper is to point out the interdependence of the features to be incorporated in the system.

The paper states that at times the incompatibility between features is visible from the start, yet sometimes two features which seem completely unrelated end up affecting each other significantly. If the trade-offs involved aren’t explored at the beginning, it might not be possible to include desirable features, and trying to accommodate conflicting features results in messy code at the cost of elegance.

Intermediate Processes:

Let’s suppose a system doesn’t allow indirect communication between processes that cannot establish a connection. The users care only about the logical sender and receiver of the messages: they don’t care what path the messages take or how many processes they travel through to reach their final destination. In such a situation, intermediate processes come to our rescue. They’re not a part of the IPCM but are inserted, by a directory or broker process when the connection is set up, between two processes that can’t communicate directly. They’re the only ones aware of the indirect nature of the communication.

Centralized vs. Distributed Systems:

Centralized Communication Facility

  1. Has a single agent which is able to maintain all state information related to the communication happening in the system
  2. The agent can also change the state of the system in a well-defined manner

For example, if we consider the IPCM to be the centralized agent, it’ll be responsible for matching the SEND and RECEIVE requests of two processes, transferring data between their buffers and relaying appropriate status to both.

Distributed Communication Facility

  1. No single agent has the complete state information at any time
  2. The IPCM is made of several individual components which coordinate and exchange the parts of the state information they possess.
  3. A global change can take a considerable amount of time
  4. If one of the components crashes, we still care about the activity of the remaining components

Case 1:

In Figure 1, P1 and P2 are the two communicating processes on different machines over a network with their own IPCMs and P is the interface which enables this, with parts that lie on both machines. P handles the details of the network lines.

If one machine or a communication link crashes, we want the surviving IPCMs to continue their operation. At least one component should detect the failure and be able to communicate it. (In the case of a communication link failure, both ends must know.)

Case 2:

Distributed communication can also happen on the same machine, given that there are one or more intermediate processes taking part in the system. In that case, P, P1 and P2 will be processes on the same system with identical IPCMs. P is an intermediate process which facilitates the communication between P1 and P2.

Transactions between P1 and P2 consist of two steps: P1 to P and P to P2. Normally, the status returned to P1 would reflect the result of the P1 to P transfer, but P1 is interested in the status of the overall transaction from P1 to P2.

One way to deal with this is a delayed status return. The status isn’t sent to the sender immediately after the transaction occurs but only when the sender issues a SEND STATUS primitive. In the example above, after receiving the message from P1, P further sends it to P2, doesn’t send any status to P1 and waits to receive a status from P2. When it receives the appropriate status from P2, it relays it to P1 using the SEND STATUS primitive.
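The delayed status scheme above can be sketched in a few lines of Python. This is my own illustration, not code from the paper; the class and method names are assumptions chosen to mirror the description.

```python
class Endpoint:
    """Final receiver (P2): accepts a message and reports the outcome."""
    def receive(self, message):
        return "DELIVERED"

class Intermediate:
    """P: forwards a message but withholds status until the far end reports it."""
    def __init__(self, downstream):
        self.downstream = downstream
        self.pending_status = None

    def transfer(self, message):
        # Forward immediately; report nothing back to the sender yet.
        self.pending_status = self.downstream.receive(message)

    def send_status(self):
        # The SEND STATUS step: relay the end-to-end outcome to the sender.
        return self.pending_status

p2 = Endpoint()
p = Intermediate(p2)
p.transfer("hello")       # P1 -> P -> P2; P1 has received no status yet
status = p.send_status()  # only now does P1 learn the overall outcome
```

The point of the sketch is simply that `transfer` and `send_status` are separate steps, so the status P1 eventually sees reflects the whole P1-to-P2 transaction rather than just the first hop.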

Special Cases of Distributed Facility

This section starts out by stating some facts and reasoning around them.

FACT 0: A perfectly reliable distributed system can be made to behave as a centralized system.

Theoretically, this is possible if:

  1. The state of different components of the system is known at any given time
  2. After every transaction, the status is relayed properly between the processes through their IPCMs using reliable communication.

However, this isn’t possible in practice because we don’t have a perfectly reliable network. Hence, the more realistic version of the above fact is:

FACT I: A distributed IPCM can be made to simulate a centralized system provided that:
  1. The overall system remains connected at all times, and
  2. When a communication link fails, the component IPCMs connected to it know about it, and
  3. The mean time between two consecutive failures is large compared to the mean transaction time across the network.

The paper states that if the above conditions are met, we can establish communication links that are reliable enough to simulate a centralized system because:

  1. There is always a path from the sender to the receiver
  2. Only one copy of an undelivered message is retained by the system, since link failures are detected. Hence a message cannot be lost while undelivered, and it is removed from the system once delivered.
  3. A routing strategy and a bound on the failure rate ensures that a message moving around in a subset of nodes will eventually get out in finite time if the target node isn’t present in the subset.

The cases described above are special cases because they make a lot of assumptions, use inefficient algorithms and don’t take into account network partitions leading to disconnected components.

Status in Distributed Systems

Complete Status

A complete status is one that relays the final outcome of the message, such as whether it reached its destination.

FACT 2: In an arbitrary distributed facility, it is impossible to provide complete status.

Case 1:

Assume that a system is partitioned into two disjoint networks, leaving the IPCMs disconnected. Now, if IPCM1 was awaiting a status from IPCM2, there is no way to get it and relay the result to P1.

Case 2:

Consider Figure 2: if the system has no reliable failure detection mechanism and IPCM2 sends a status message to IPCM1, it can never be sure whether the message arrived without an acknowledgement, and the acknowledgements themselves need acknowledgements. This leads to an infinite exchange of messages.


Time-outs

Time-outs are required because the system has finite resources and can’t afford to be deadlocked forever. The paper states that:

FACT 3: In a distributed system with timeouts, it is impossible to provide complete status (even if the system is absolutely reliable).

In figure 3, P1 is trying to send P2 a message through a chain of IPCMs.

Suppose I1 takes data from P1, but P1’s request times out before I1 hears about the status of the transaction. I1 now has no knowledge of the final outcome, i.e. whether the data was successfully received by P2. Whatever status it returns to P1 may prove to be incorrect. Hence, it’s impossible to provide complete status in a distributed facility with time-outs.

Insertion Property

An IPCM has insertion property if we insert an intermediate process P between two processes P1 and P2 that wish to communicate such that:

  1. P is invisible to both P1 and P2
  2. The status relayed to P1 and P2 is the same they’d get if directly connected

FACT 4: In a distributed system with timeouts, the insertion property can be possessed only if the IPCM withholds some status information that is known to it.

Delayed status is required to fulfill the insertion property. Suppose a message is sent from P1 to P2 through P. What happens if P receives P1’s message and goes into the await-status state, but P1’s request times out before P learns the outcome?

We can’t tell P1 the final outcome of the exchange, as that’s not available yet. We also can’t tell P1 that the message is awaiting status, because that would reveal that the message was received by someone. Nor can we report that P2 never received the data, because such a status cannot arise when P1 and P2 are directly connected, and reporting it would therefore violate the insertion property.

The solution is to provide an ambiguous status to P1, one that could equally well have arisen if the two processes were connected directly.

Thus, a deliberate suppression of what happened is introduced by providing the same status to cover a time-out which occurs while awaiting status and, say, a transmission error.

Logical and Physical Messages

The basic function of an IPCM is the transfer and synchronization of data between two processes. For ease of transfer, the physical messages originally sent by the sender process as part of a single operation may be divided into smaller messages, known as logical messages.

Buffer Size Considerations

As depicted in figure 5, if a buffer mismatch arises, we can take the following approaches to fix it:

  1. Define a system-wide buffer size. This is extremely restrictive, especially within a network of heterogeneous systems.
  2. Satisfy the request with the small buffer size and inform both the processes involved what happened. This approach requires that the processes are aware of the low level details of the communication.
  3. Allow partial transfers. In this approach, only the process that issued the smaller request (50 words) is woken up. All other processes remain asleep awaiting further transfers. If the receiver’s buffer isn’t full, an EOM (End Of Message) indicator is required to wake it up.

Partial Transfers and Well-Known Ports

In Figure 6, a service process using a well-known port is accepting requests from several user processes, P1…Pn. If P1 sends a message to the service process that isn’t complete and doesn’t fill its buffer, we need to consider the following situations:

  1. The well-known port is reserved for P1. No other process can communicate with the service process using it until P1 is done.
  2. If the service process times out while P1 is preparing to send the second and final part of the message, we need to handle the time-out without informing P1 that the first part has been ignored, since P1 isn’t listening for incoming messages from the service process.

Since none of these problems arise without partial transfers, one solution is to ban them altogether. For example:

This is the approach taken in the ARPANET, where communication to well-known ports is restricted to short, complete messages, which are used to set up a separate connection for subsequent communication.

Buffer Processes

This solution is modeled around the creation of dynamic processes.

Whenever P1 wishes to transfer data to the service process, a new process S1 is created, which receives messages from P1 until the logical message is complete, sleeping as and when required. Then it sends the complete physical message to the service process with the EOM flag set. Thus no partial transfers happen between S1 and the service process; they’re all filtered out before that.
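A buffer process like S1 can be sketched as follows. This is an illustrative Python toy, not the paper’s design; the names `BufferProcess` and `eom` are my own.

```python
class BufferProcess:
    """Accumulates partial transfers; delivers one complete message on EOM."""
    def __init__(self, service_inbox):
        self.service_inbox = service_inbox  # stands in for the service process
        self.parts = []

    def receive(self, data, eom=False):
        self.parts.append(data)
        if eom:
            # The service process only ever sees complete logical messages.
            self.service_inbox.append("".join(self.parts))
            self.parts = []

service_inbox = []
s1 = BufferProcess(service_inbox)
s1.receive("first half, ")           # partial transfer: service sees nothing
s1.receive("second half", eom=True)  # EOM: one complete message is delivered
```

Because `service_inbox` only ever receives joined messages, the partial transfers between P1 and S1 are invisible to the service process, which is exactly the filtering the text describes.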

However, this kind of solution isn’t directly possible with well-known ports: S1 is inserted between P1 and the service process when the connection is initialized, but in the case of well-known ports, no initialization takes place.

In discussing status returned to the users, we have indicated how the presence of certain other features limits the information that can be provided.
In fact, we have shown situations in which uncertain status had to be returned, providing almost no information as to the outcome of the transaction.

Even though the inclusion of the insertion property complicates things, it is beneficial to use the weaker version of it.

Finally, we list a set of features which may be combined in a working IPCM:
(1) Time-outs
(2) Weak insertion property and partial transfer
(3) Buffer processes
(4) Well-known ports — with appropriate methods to deal with partial transfers to them.

Some Constraints & Trade-offs In The Design of Network Communications: A Summary was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-10 23:28:04

Let’s try to break down the paper “Building On Quicksand” published by Pat Helland and David Campbell in 2009. All pull quotes are from the paper.

The paper focuses on the design of large, fault-tolerant, replicated distributed systems, and discusses how such systems evolve as requirements change over time. It starts off by stating “Reliable systems have always been built out of unreliable components”.

As the granularity of the unreliable component grows (from a mirrored disk to a system to a data center), the latency to communicate with a backup becomes unpalatable. This leads to a more relaxed model for fault tolerance. The primary system will acknowledge the work request and its actions without waiting to ensure that the backup is notified of the work. This improves the responsiveness of the system because the user is not delayed behind a slow interaction with the backup.

Fault-tolerant systems can be made of many components. Their goal is to keep functioning when one of those components fails. We don’t consider Byzantine failures in this discussion; instead, we assume the fail-fast model, in which a component either works correctly or fails outright.

The paper goes on to compare two versions of the Tandem NonStop system: one that used synchronous checkpointing and one that used asynchronous checkpointing. Refer to section 3 of the paper for all the details. I’d like to touch upon the difference between the two checkpointing strategies.

  • Synchronous checkpointing: with every write to the primary, state is sent to the backup. Only after the backup acknowledges the write does the primary send a response to the client that issued the write request. This ensures that when the primary fails, the backup can take over without losing any work.
  • Asynchronous checkpointing: the primary acknowledges and commits the write as soon as it processes it, without waiting for a reply from the backup. This technique improves latency but poses other challenges, addressed later.
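The difference between the two strategies can be sketched roughly in Python. This is my own toy model, assuming plain in-memory lists for the primary, the backup, and the shipping queue; it only illustrates *when* the backup learns about a write relative to the client’s acknowledgement.

```python
def sync_write(primary, backup, value):
    """Synchronous checkpointing: the backup is updated before the client is acked."""
    primary.append(value)
    backup.append(value)   # wait for the backup before acknowledging
    return "ACK"

def async_write(primary, backup_queue, value):
    """Asynchronous checkpointing: ack immediately, ship to the backup later."""
    primary.append(value)
    backup_queue.append(value)  # shipped in the background; lost if the primary crashes now
    return "ACK"

primary, backup, queue = [], [], []
sync_write(primary, backup, "t1")   # client is acked only after the backup has t1
async_write(primary, queue, "t2")   # client is acked, but the backup lacks t2
at_risk = [v for v in queue if v not in backup]
```

The `at_risk` list is exactly the window the log shipping section below worries about: work acknowledged to the client but not yet at the backup.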

Log Shipping

A classic database system has a process that reads the log and ships it to a backup data-center. The normal implementation of this mechanism commits transactions at the primary system (acknowledging the user’s commit request) and asynchronously ships the log. The backup database replays the log, constantly playing catch-up.

The mechanism described above is termed log shipping. The main problem it poses is that when the primary fails and the backup takes over, some recent transactions might be lost.

This inherently opens up a window in which the work is acknowledged to the client but it has not yet been shipped to the backup. A failure of the primary during this window will lock the work inside the primary for an unknown period of time. The backup will move ahead without knowledge of the locked-up work.

The introduction of asynchrony into the system has an advantage in latency, response time and performance. However, it makes the system more prone to the possibility of losing work when the primary fails. There are two ways to deal with this:

  1. Discard the work locked in the primary when it fails. Whether a system can do that or not depends on the requirements and business rules.
  2. Have a recovery mechanism to sync the primary with the backups when it comes back up and retry the lost work. This is possible only if the operations can be retried in an idempotent way and out-of-order retries are possible.

The system loses the notion of what the authors call “an authoritative truth”. Nobody knows the accurate state of the system at any given point in time if the work is locked in an unavailable backup or primary.

The authors conclude that business rules in a system with asynchronous checkpointing are probabilistic.

If a primary uses asynchronous checkpointing and applies a business rule on the incoming work, it is necessarily a probabilistic rule. The primary, despite its best intentions, cannot know it will be alive to enforce the business rules.
When the backup system that participates in the enforcement of these business rules is asynchronously tied to the primary, the enforcement of these rules inevitably becomes probabilistic!

The authors state that commutative operations, operations that can be reordered, can be executed independently, as long as the operation preserves business rules. However, this is hard to do with storage systems because the write operation isn’t commutative.

Another consideration is making the work of a single operation idempotent: executing the operation any number of times should result in the same state of the system.

To ensure this, applications typically assign a unique number or ID to the work. This is assigned at the ingress to the system (i.e. whichever replica first handles the work). As the work request rattles around the network, it is easy for a replica to detect that it has already seen that operation and, hence, not do the work twice.
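The unique-ID technique described above can be sketched as a replica that remembers which work items it has already applied. This is an illustrative sketch, not code from the paper; a real system would persist the seen-set and the names here are my own.

```python
class Replica:
    """Applies each uniquely identified piece of work at most once."""
    def __init__(self):
        self.seen = set()   # 'memories': IDs of work already applied
        self.balance = 0

    def apply(self, work_id, amount):
        if work_id in self.seen:
            return          # duplicate delivery: already applied, do nothing
        self.seen.add(work_id)
        self.balance += amount

r = Replica()
r.apply("op-1", 10)
r.apply("op-1", 10)   # a retry rattling around the network: detected and skipped
r.apply("op-2", 5)
```

Because the retried "op-1" is filtered out by its ID, applying the stream once or many times leaves the replica in the same state, which is the idempotence the authors require.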

The authors suggest that different operations within a system provide different consistency guarantees. Yet, this depends on the business requirements. Some operations can choose classic consistency over availability and vice versa.

Next, they argue that as soon as there is no notion of authoritative truth in a system, all computing boils down to three things: memories, guesses, and apologies.

  1. Memories: you can only hope that your replica remembers what it has already seen.
  2. Guesses: Due to only partial knowledge being available, the replicas take actions based on local state and may be wrong. “In any system which allows a degradation of the absolute truth, any action is, at best, a guess.” Any action in such a system has a high probability of being successful, but it’s still a guess.
  3. Apologies: Mistakes are inevitable. Hence, every business needs to have an apology mechanism in place either through human intervention or by automating it.

The paper next discusses the topic of eventual consistency. The authors do this by taking the Amazon shopping cart built using Dynamo & a system for clearing checks as examples. A single replica identifies and processes the work coming into these systems. It flows to other replicas as and when connectivity permits. The requests coming into these systems are commutative (reorderable). They can be processed at different replicas in different orders.

Storage systems alone cannot provide the commutativity we need to create robust systems that function with asynchronous checkpointing. We need the business operations themselves to be reorderable. Amazon’s Dynamo does not do this by itself. The shopping cart application on top of the Dynamo storage system is responsible for the semantics of eventual consistency and commutativity. The authors think it is time for us to move past the examination of eventual consistency in terms of updates and storage systems. The real action comes when examining application-based operation semantics.

Next, they discuss two strategies for allocating resources in replicas that might not be able to communicate with each other:

  1. Over-provisioning: the resources are partitioned between replicas. Each has a fixed subset of resources they can allocate. No replica can allocate a resource that’s not actually available.
  2. Over-booking: the resources can be individually allocated without ensuring strict partitioning. This may lead to the replicas allocating a resource that’s not available, promising something they can’t deliver.

The paper talks also about something termed as the “seat reservation pattern”. This is a compromise between over-provisioning and over-booking:

Anyone who has purchased tickets online will recognize the “Seat Reservation” pattern where you can identify potential seats and then you have a bounded period of time, (typically minutes), to complete the transaction. If the transaction is not successfully concluded within the time period, the seats are once again marked as “available”.
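The seat reservation pattern can be sketched as a seat map whose holds expire after a bounded time. This is my own illustration under simple assumptions (explicit timestamps instead of a clock, no concurrency); the names are not from the paper.

```python
class SeatMap:
    """Holds a seat for a bounded time; expired holds become available again."""
    def __init__(self, hold_seconds):
        self.hold_seconds = hold_seconds
        self.holds = {}   # seat -> time at which the hold expires
        self.sold = set()

    def hold(self, seat, now):
        expiry = self.holds.get(seat)
        if seat in self.sold or (expiry is not None and expiry > now):
            return False  # sold, or held by someone else right now
        self.holds[seat] = now + self.hold_seconds
        return True

    def purchase(self, seat, now):
        if self.holds.get(seat, 0) > now:  # transaction completed within the hold
            self.sold.add(seat)
            return True
        return False                       # hold expired: seat went back on sale

seats = SeatMap(hold_seconds=300)
seats.hold("12A", now=0)                  # seat held at t=0 for 5 minutes
late = seats.purchase("12A", now=400)     # too slow: the hold has expired
seats.hold("12A", now=400)                # the seat is available again
ok = seats.purchase("12A", now=450)       # completed within the new hold
```

The bounded hold gives the over-booking-free guarantee of over-provisioning while still letting an abandoned transaction release its resources, which is the compromise the quote describes.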

ACID 2.0

The classic definition of ACID stands for “Atomic, Consistent, Isolated, and Durable”. Its goal is to make the application think that there is a single computer which isn’t doing anything else while the transaction is being processed. The authors propose a new definition of ACID, which stands for Associative, Commutative, Idempotent, and Distributed.

The goal for ACID 2.0 is to succeed if the pieces of the work happen: at least once, anywhere in the system, in any order. This defines a new KIND of consistency. The individual steps happen at one or more systems. The application is explicitly tolerant of work happening out of order. It is tolerant of the work happening more than once per machine, too.

Going by the classic definition of ACID, a linear history is a basis for fault tolerance. If we want to achieve the same guarantees in a distributed system, it’ll require concurrency control mechanisms which “tend to be fragile”.

When the application is constrained to the additional requirements of commutativity and associativity, the world gets a LOT easier. No longer must the state be checkpointed across failure units in a synchronous fashion. Instead, it is possible to be very lazy about the sharing of information. This opens up offline, slow links, low-quality datacenters, and more.

In conclusion:

We have attempted to describe the patterns in use by many applications today as they cope with failures in widely distributed systems. It is the reorderability of work and repeatability of work that is essential to allowing successful application execution on top of the chaos of a distributed world in which systems come and go when they feel like it.

P.S. — If you made it this far and would like to receive a mail whenever I publish one of these posts, sign up here.

Building on Quicksand: A Summary was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-08 21:19:06


    My name is Sandhya Bankar, and I recently finished my internship with The Linux Foundation as an Outreachy intern.

I am an open source enthusiast who is passionate about learning and exploring the Linux kernel, and I have a lot of fun working on it.
I was a Linux kernel intern through Outreachy in Round 13 and have just completed the internship (https://wiki.gnome.org/Outreachy/2016/DecemberMarch). Outreachy is hosted by Software Freedom Conservancy, with special support from Red Hat and the GNOME Foundation, to support women in technology participating in free and open source software, since contributors to free and open source projects have mostly been men.

I was selected for the project "radix tree __alloc_fd" under the guidance of my mentors, Matthew Wilcox and Rik Van Riel. I got amazing support from my mentors throughout the __alloc_fd project.
Specifically, I worked on a patch set for the IDR (integer ID management), which is a radix-tree-like structure. The project converts the file allocation code to use the IDR: file descriptors are currently allocated using a custom allocator, and the patch set replaces that custom code with an IDR. This replacement results in some memory savings for processes with relatively few open files and improves the performance of workloads with very large numbers of open files. The link to the submitted patch is


I have completed my Master of Engineering in Computer Networking and Bachelor of Engineering in Electronics Engineering with distinction. I have done a P.G. Diploma course in embedded systems and VLSI design from CDAC Pune, Maharashtra, India. My main interest is C programming, along with data structures and algorithms. I have extensive experience sending patches through git via mutt.

Other than technical studies I love reading books. I also like to spend time on outdoor games.

Please do visit my LinkedIn Profile,


Sandhya Babanrao Bankar | Kernel Stuff | 2017-12-07 23:24:39

So this is a blog post about some songs that have been stuck in my head lately, that are on my playlist, and that I found myself thinking about why I enjoy them.
Without further ado, here they are.

1. Both Sides Now - Joni Mitchell

A beautiful musing on love. The experience and wisdom the years implied in the song's lyrics come through in Joni's voice and in the accompanying instrumentation of the song. Lovely for when you just want to reflect.

2. LoveStoned/I Think She Knows - Justin Timberlake

When I was younger, I was all about how much noise a song had. The upbeat tempo that kept you dancing. So the 'I Think She Knows' interlude didn't interest me; it pretty much went over my head. Listening to the song now, I agree with John Mayer; the 45 seconds or so of 'I Think She Knows' really are sonic bliss. At least to me. Your mileage may vary 😎.

3. She Has No Time - Keane

A lovely song for when you wonder so much about what could have been that your heart aches and you almost want to curse the day you met that someone special. 

"You think your days are uneventful / And no one ever thinks about you ", "You think your days are ordinary / And no one ever thinks about you / But we're all the same" 

We all feel this way at some point. We think we must be boring, unexciting, uninteresting, somehow inherently below par. Not good enough. The closing line reminds us that before we throw that pity party, it's not just us who feel strangely uninspiring; we all do from time to time. It isn't the reason why someone would leave. 

"Think about the lonely people / Then think about the day she found you / Or lie to yourself / And see it all dissolve around you" 

Remember that there was a 'before', a moment when you'd give anything to feel what you felt, to have that experience of cherishing someone however brief. They also tell you that you have a choice: you can choose to be grateful, to keep the memory of that moment as something wonderful that happened and you are happy that it happened, or you can ruin it with sour grapes, and say to yourself: "It didn't matter much" 

What you had was beautiful, accept it and move on. 

4. Happen - Emeli Sande

I remember when I first listened to the song, Emeli's soaring vocals caught me off guard and made me realize that I had missed half the song already. I think of this song as poetry made music, if that makes sense 😉. 

There are a million more dreams being dreamt tonight / But somehow this one feels like it just might / Happen / Happen to me

I won't try to explain. Just listen to the song.  

5. Same Ol' Mistakes (cover, orig. by Tame Impala) - Rihanna


I typically try to listen to the original of a song, if I like the cover. I make an exception for Rihanna because she does a good job with this song. Honestly I could unpack this song for days. So below are my favourite (lyrical) bits.

I know you don't think it's right / I know that you think it's fake / Maybe fake's what I like / The point is I have the right

Very true. 'Nuff said :D

Man I know that it's hard to digest / But maybe your story / Ain't so different from the rest / And I know it seems wrong to accept / But you've got your demons / And she's got her regrets
Man I know that it's hard to digest / A realization is as good as a guess / And I know it seems wrong to accept / But you've got your demons / And she's got her regrets..

It is hard to take when you're made to consider that maybe, just maybe, you're not alone in what you feel. That it has been done to death by other people. And the only reason why you think it is special, is because it's your first time or you have too much hope or whatever. Sooner or later, baggage comes into the picture. What you were running from catches up with you. The circle of life, or karma, if you will.

Leni Kadali | Memoirs Of A Secular Nun | 2017-12-07 20:57:35

Making Hamburger menu functional, Attending Wikimania’17, Continuing the work on enhancing usability of the dashboard on mobile devices.

Week 9

Completion of refactoring Navigation bar Haml code to React, worked on resolving some unwanted behavior. Added links to the hamburger menu and made the required styling changes.

Active Pull Request:

[WIP] Hamburger menu by ragesoss · Pull Request #1332 · WikiEducationFoundation/WikiEduDashboard

Week 10 & 11 — Wikimania 2017 fever

Attending Wikimania’17. Making hamburger-menu work on Chrome and Firefox. Successful meeting with Jonathan to understand documentation format of user testings done during the internship. Traveling and post Wikimania Blogging.

Link to the blog posts.

Week 12

Made the Hamburger menu cross browser compatible and added styling to dashboard pages for making them mobile friendly using CSS media queries.

Screenshots of the Mobile Layout

Hamburger menu in action.

Sejal Khatri | Stories by Sejal Khatri on Medium | 2017-12-06 09:32:30

UX folks may be in the best position to identify ethical issues in their companies. Should it be their responsibility?

This is the final piece of the story I’ve been telling. It started with an explanation of some of the problems currently present in the implementation of UX practices. I then described various ethical problems in technology companies today.

I will now explain how UX folks are uniquely situated to notice ethical concerns. I will also explain how, despite their unique perspective, I do not think that UX folks should be the gatekeepers of ethics. Much like UX itself, ethical considerations are too likely to be ignored without buy-in from the top levels of a company.

Ethics and UX

Ethics and user experience are tied together for a few reasons:

  • Folks who are working on the user experience of a piece of software will often have a good view on the ethics of it — if they stop to consider it.
  • UX folks are trained to see the impact of a product on people’s lives. We are a bridge between software and humans, and ethical concerns are also in that space.
  • Like UX, ethics needs buy-in throughout the company. It can otherwise be difficult or impossible to enforce, as ethical considerations can be at odds with short-term company priorities like shareholder profits or introducing convenient (but potentially problematic) features.

Given that UX folks are in a great position to see ethical problems as they come up, it may be tempting to suggest that we should be the ones in charge of ethics. Unfortunately, as I described in an earlier section, many UX folks are already struggling to get buy-in for their UX work. Without buy-in at the top level, we are unlikely to have the power to do anything about it, and may risk our jobs and livelihoods.

This is made worse by the fact that there are a lot of new UX folks in the Boston area. If they are on the younger side of things, they may not realize that they are being asked to do the impossible, or that they can push back. New UXers may also have taken out student loans, whether as an undergraduate student or to enable a career change into UX, thereby effectively becoming indentured servants who can’t even use bankruptcy to escape them.

Even new and career-changer UX folks who have not taken out loans can feel like they can’t afford to annoy the company they’re working for. Given how few entry-level jobs there are — at least in the Boston area — it’s a huge risk for someone new to UX to be taking.

The risk of pointing out ethical problems is even worse when you are talking about an ethnic minority or others who are in an especially vulnerable position, and who may also be more likely to notice potential problem-areas.

Individual UX folks should not be the sole custodians of ethics nor of the commitment to a better user experience. Without buy-in at high levels of the company, neither of these are likely to work out well for anyone.

Who should be in charge of software ethics?

Who, then, should be the custodians of keeping software from causing harm?

The UXPA Professional Organization

The UXPA organization has a code of conduct, which is excellent. Unfortunately, it doesn’t really have much to do with the ethical concerns that have come up lately. At best, we have the lines “UX practitioners shall never knowingly use material that is illegal, immoral, or which may hurt or damage a person or group of people.” and “UX practitioners shall advise clients and employers when a proposed project is not in the client’s best interest and provide a rationale for this advice.” However, these are relevant to the problem at hand only if a UX practitioner can tell that something might cause harm, or if a client’s best interest matches up with the public’s best interest.

The code of conduct in question may not be specific enough, either: the main purpose of such a code of conduct is to offer practitioners a place to refer to when something goes against it. It is not clear that this code offers that opportunity, nor is it really a UX professional’s job to watch for ethics concerns. We may be best positioned, and we may be able to learn what to look for, but ethical concerns are only a part of the many tasks a UX professional may have.

Companies Themselves

A better question might be: how do we encourage companies to adopt and stick to an ethics plan around digital products? Once something like that is in place, it becomes a _lot_ easier for employees to take it into account. Knowing what to pay attention to, what areas to explore, and taking the time to do so would be a huge improvement.

Maybe instead of asking UX folks to be the custodians of ethics, we can encourage companies to pay attention to this problem. UX folks could certainly work with and guide their companies when those companies are looking to be more ethically conscious.

I’m not at all certain what might get companies to pay attention to ethics, except possibly for things like the current investigation into the effects of Russian interference in our politics. When it’s no longer possible to hide the evil that one’s thoughtlessness — or one’s focus on money over morals — has caused, maybe that will finally get companies to implement and enforce clear, ethical guidelines.

What do you think?

What are your thoughts on how — or even if — ethics should be brought to the table around high tech?

Thank you to Alex Feinman and Emily Lawrence for their feedback on this entry!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-12-06 00:19:29

Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiar with using Github and creating a Python virtual environment for development. If you aren't, I recommend you learn with the Django Girls tutorial, which covers that and more.

This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).

The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.

Screenshot of Github, the menu to create a new repository is open and a new repo is being created with the name 'rsip22.github.io'

I recommend that you initialize your repository with a README, with a .gitignore for Python and with a free software license. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.

Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:

$ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git

And change to the new directory:

$ cd YOUR_USERNAME.github.io

Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. To do that, you must create a new branch called "source":

$ git checkout -b source

Create the virtualenv with the Python3 version installed on your system.

On GNU/Linux systems, the command might go as:

$ python3 -m venv venv

or as

$ virtualenv --python=python3.5 venv

And activate it:

$ source venv/bin/activate

Inside the virtualenv, you have to install pelican and its dependencies. You should also install ghp-import (to help us with publishing to Github) and Markdown (for writing your posts using markdown). It goes like this:

(venv)$ pip install pelican markdown ghp-import

Once that is done, you can start creating your blog using pelican-quickstart:

(venv)$ pelican-quickstart

This will prompt you with a series of questions. Before answering them, take a look at my answers below:

> Where do you want to create your new web site? [.] ./
> What will be the title of this web site? Renata's blog
> Who will be the author of this web site? Renata
> What will be the default language of this web site? [pt] en
> Do you want to specify a URL prefix? e.g., http://example.com   (Y/n) n
> Do you want to enable article pagination? (Y/n) y
> How many articles per page do you want? [10] 10
> What is your time zone? [Europe/Paris] America/Sao_Paulo
> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
> Do you want to upload your website using FTP? (y/N) n
> Do you want to upload your website using SSH? (y/N) n
> Do you want to upload your website using Dropbox? (y/N) n
> Do you want to upload your website using S3? (y/N) n
> Do you want to upload your website using Rackspace Cloud Files? (y/N) n
> Do you want to upload your website using GitHub Pages? (y/N) y
> Is this your personal page (username.github.io)? (y/N) y
Done. Your new project is available at /home/username/YOUR_USERNAME.github.io

About the time zone, it should be specified as a tz database name (see the List of tz database time zones for the full list).
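If you want to double-check that the time zone name you gave pelican-quickstart is a valid tz database name, a quick way (on Python 3.9+, which ships the `zoneinfo` module) is:

```python
from zoneinfo import ZoneInfo, available_timezones

# Check that a string is a valid tz database name before using it
# in pelican-quickstart (name here is just an example).
name = "America/Sao_Paulo"
assert name in available_timezones()  # raises AssertionError if invalid
print(ZoneInfo(name))
```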

Now, go ahead and create your first blog post! You might want to open the project folder on your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata which identifies the Title, Date, Category and more from the post before you start with the content, like this:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

Title: My first post
Date: 2017-11-26 10:01
Modified: 2017-11-27 12:30
Category: misc
Tags: first, misc
Slug: My-first-post
Authors: Your name
Summary: What does your post talk about? Write here.

This is the *first post* from my Pelican blog. **YAY!**

Let's see how it looks?

Go to the terminal, generate the static files and start the server. To do that, use the following command:

(venv)$ make html && make serve

While this command is running, you should be able to visit the blog in your favorite web browser by typing localhost:8000 in the address bar.

Screenshot of the blog home. It has a header with the title Renata\'s blog, the first post on the left, info about the post on the right, links and social on the bottom.

Pretty neat, right?

Now, what if you want to put an image in a post, how do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find the pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes

STATIC_PATHS = ['images']

Save it. Go to your post and add the image this way:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

![Write here a good description for people who can't see the image]({filename}/images/IMAGE_NAME.jpg)

You can interrupt the server at any time by pressing CTRL+C on the terminal. But you should start it again and check if the image is correct. Can you remember how?

(venv)$ make html && make serve

One last step before your coding is "done": you should make sure anyone can read your posts using ATOM or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes

FEED_ALL_ATOM = 'feeds/all.atom.xml'
FEED_ALL_RSS = 'feeds/all.rss.xml'
AUTHOR_FEED_RSS = 'feeds/%s.rss.xml'

Save everything so you can send the code to Github. You can do that by adding all files, committing it with a message ('first commit') and using git push. You will be asked for your Github login and password.

$ git add -A && git commit -a -m 'first commit' && git push --all

And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:

$ make github

You will be asked for your Github login and password again. And... voilà! Your new blog should be live on https://YOUR_USERNAME.github.io.
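If you're curious what `make github` does under the hood, the quickstart-generated Makefile runs something roughly like the following (the exact flags and variable names may differ between Pelican versions, so treat this as a sketch):

```shell
# Approximate equivalent of the `github` target in the generated Makefile:
pelican content -o output -s publishconf.py          # build the static site
ghp-import -m "Generate Pelican site" -b master output  # commit output/ to master
git push origin master                               # publish to GitHub Pages
```

That is why the "source" branch holds your code while "master" holds only the generated HTML.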

If you had an error in any step of the way, please reread this tutorial and try to detect in which part the problem happened, because that is the first step to debugging. Sometimes, even something simple like a typo or, with Python, a wrong indentation, can give us trouble. Shout out and ask for help online or in your community.

For tips on how to write your posts using Markdown, you should read the Daring Fireball Markdown guide.

To get other themes, I recommend you visit Pelican Themes.

This post was adapted from Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme. I hope it was somewhat useful for you.

Renata D'Avila | Renata's blog | 2017-12-05 22:30:00

This year WineConf was held in Wroclaw, Poland, October 28-29. I gave a presentation about my work on the AppDB on the second day of the conference.

My AppDB work focused on two broad areas, the admin controls and the test report form. The former is the most important to me, but is the least visible to others, so I enjoyed having the chance to show off some of the improvements I had made to that area.

The bulk of my talk focused on changes to the test report form, including a demonstration of how the form now works and a discussion of my rationale for making the changes. I also talked a little about my future plans for that form, which include making it more responsive using JavaScript.

I ended the talk with a few stats from the new fields that I had added to the test report form:


[Three charts, covering the periods June 5, 2017--October 23, 2017; August 16, 2017--October 23, 2017; and August 25, 2017--October 23, 2017]


The changes haven't been live for very long, so the data is limited. Next year, with a bigger sample to look at, I'll be able to do more detailed breakdowns, such as a breakdown of ratings across GPU/drivers. 

Rosanne DiMesio | Notes from an Internship | 2017-12-05 18:55:04

Distributed Computing in a nutshell: How distributed systems work

This post distills the material presented in the paper titled “A Note on Distributed Systems” published in 1994 by Jim Waldo and others.

The paper presents the differences between local and distributed computing in the context of Object Oriented Programming. It explains why treating them the same is incorrect and leads to applications that aren’t robust or reliable.


The paper kicks off by stating that the current work in distributed systems is modeled around objects — more specifically, a unified view of objects. Objects are defined by their supported interfaces and the operations they support.

Naturally, this can be extended to imply that objects in the same address space, or in a different address space on the same machine, or on a different machine, all behave in a similar manner. Their location is an implementation detail.

Let’s define the most common terms in this paper:

Local Computing

It deals with programs that are confined to a single address space only.

Distributed Computing

It deals with programs that can make calls to objects in different address spaces either on the same machine or on a different machine.

The Vision of Unified Objects

Implicit in this vision is that the system will be “objects all the way down.” This means that all current invocations, or calls for system services, will eventually be converted into calls that might be made to an object residing on some other machine. There is a single paradigm of object use and communication used no matter what the location of the object might be.

This refers to the assumption that all objects are defined only in terms of their interfaces. Their implementation also includes location of the object, and is independent of their interfaces and hidden from the programmer.

As far as the programmer is concerned, they write the same type of call for every object, whether local or remote. The system takes care of sending the message by figuring out the underlying mechanisms not visible to the programmer who is writing the application.

The hard problems in distributed computing are not the problems of how to get things on and off the wire.

The paper goes on to define the toughest challenges of building a distributed system:

  1. Latency
  2. Memory Access
  3. Partial failure and concurrency

Ensuring reasonable performance while dealing with all of the above doesn't make the life of a distributed systems engineer any easier. And the lack of any central resource or state manager adds to the various challenges. Let's observe each of these one by one.


This is the fundamental difference between local and distributed object invocation.

The paper claims that a remote call is four to five orders of magnitude slower than a local call. If the design of a system fails to recognize this fundamental difference, it is bound to suffer from serious performance problems, especially if it relies heavily on remote communication.

You need to have a thorough understanding of the application being designed so you can decide which objects should be kept together and which can be placed remotely.

If the goal is to unify the difference in latency, then we’ve two options:

  • Rely on the hardware to get faster with time to eliminate the difference in efficiency
  • Develop tools which allow us to visualize communication patterns between different objects and move them around as required. Since location is an implementation detail, this shouldn’t be too hard to achieve
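The latency gap is easy to see even on a single machine. The sketch below (illustrative, not from the paper) times a plain local function call against a round trip over a socket pair; a real call across a network would be far slower still:

```python
import socket
import time

def local_echo(x: bytes) -> bytes:
    return x

a, b = socket.socketpair()  # two connected sockets in one process

# Time 1000 plain local calls.
t0 = time.perf_counter()
for _ in range(1000):
    local_echo(b"ping")
local_t = time.perf_counter() - t0

# Time 1000 round trips through the kernel, crossing "address spaces".
t0 = time.perf_counter()
for _ in range(1000):
    a.sendall(b"ping")
    b.recv(4)
    b.sendall(b"pong")
    a.recv(4)
remote_t = time.perf_counter() - t0

print(f"local: {local_t:.6f}s, socket round trip: {remote_t:.6f}s")
assert remote_t > local_t  # the gap is typically several orders of magnitude
```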


Another difference that’s very relevant to the design of distributed systems is the pattern of memory access between local and remote objects. A pointer in the local address space isn’t valid in a remote address space.

We’re left with two choices:

  • The developer must be made aware of the difference between the access patterns
  • To unify the differences in access between local and remote access, we need to let the system handle all aspects of access to memory.

There are several ways to do that:

  • Distributed shared memory
  • Using the OOP (Object-oriented programming) paradigm, compose a system entirely of objects — one that deals only with object references.
    The transfer of data between address spaces can be dealt with by marshalling and unmarshalling the data by the layer underneath. This approach, however, makes the use of address-space-relative pointers obsolete.
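A tiny illustration of marshalling (using Python's `pickle` as a stand-in for whatever serialization layer a real system would use): the data crosses the boundary by value, so object identity, the address-space-relative "pointer", does not survive the trip.

```python
import pickle

obj = {"op": "transfer", "amount": 100}

wire = pickle.dumps(obj)    # marshal: encode the object for the wire
copy = pickle.loads(wire)   # unmarshal: rebuild it in the "other" address space

assert copy == obj          # same contents...
assert copy is not obj      # ...but a different object: the original
                            # reference was not (and could not be) transferred
```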

The danger lies in promoting the myth that “remote access and local access are exactly the same.” We should not reinforce this myth. An underlying mechanism that does not unify all memory accesses while still promoting this myth is both misleading and prone to error.

It’s important for programmers to be made aware of the various differences between accessing local and remote objects. We don’t want them to get bitten by not knowing what’s happening under the covers.

Partial failure & concurrency

Partial failure is a central reality of distributed computing.

The paper argues that both local and distributed systems are subject to failure. But it’s harder to discover what went wrong in the case of distributed systems.

For a local system, either everything is shut down or there is some central authority which can detect what went wrong (the OS, for example).

Yet, in the case of a distributed system, there is no global state or resource manager available to keep track of everything happening in and across the system. So there is no way to inform other components which may be functioning correctly which ones have failed. Components in a distributed system fail independently.

A central problem in distributed computing is insuring that the state of the whole system is consistent after such a failure. This is a problem that simply does not occur in local computing.

For a system to withstand partial failure, it’s important that it deals with indeterminacy, and that the objects react to it in a consistent manner. The interfaces must be able to state the cause of failure, if possible. And then allow the reconstruction of a “reasonable state” in case the cause can’t be determined.
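The indeterminacy is easy to demonstrate. In this sketch (illustrative only), the caller's request times out, and nothing in the caller's state can tell it whether the request was lost, the reply was lost, or the peer crashed mid-operation:

```python
import socket

a, b = socket.socketpair()  # "client" a, "server" b on one machine
a.settimeout(0.1)           # give up after 100 ms

a.sendall(b"do-op")         # request sent; the peer never answers
try:
    a.recv(16)
    outcome = "reply received"
except socket.timeout:
    # The operation may or may not have executed on the other side.
    outcome = "unknown: request lost, reply lost, or peer crashed?"

assert outcome.startswith("unknown")
```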

The question is not “can you make remote method invocation look like local method invocation,” but rather “what is the price of making remote method invocation identical to local method invocation?”

Two approaches come to mind:

  1. Treat all interfaces and objects as local. The problem with this approach is that it doesn’t take into account the failure models associated with distributed systems. Therefore, it’s indeterministic by nature.
  2. Treat all interfaces and objects as remote. The flaw with this approach is that it over-complicates local computing. It adds on a ton of work for objects that are never accessed remotely.

A better approach is to accept that there are irreconcilable differences between local and distributed computing, and to be conscious of those differences at all stages of the design and implementation of distributed applications.

P.S. — If you made it this far and would like to receive a mail whenever I publish one of these posts, sign up here.

A Note on Distributed Systems was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-04 17:16:43

This article will distill the contents of the academic paper Viewstamped Replication Revisited by Barbara Liskov and James Cowling. All quotations are taken from that paper.

It presents an updated explanation of Viewstamped Replication, a replication technique that handles failures in which nodes crash. It describes how client requests are handled, how the group reorganizes when a replica fails, and how a failed replica is able to rejoin the group.


The Viewstamped Replication protocol, referred to as VR, is used for replicated services that run on many nodes known as replicas. VR uses state machine replication: it maintains state and makes it accessible to the clients consuming that service.

Some features of VR:

  • VR is primarily a replication protocol, but it provides consensus too.
  • VR doesn’t use any disk I/O — it uses replicated state for persistence.
  • VR deals only with crash failures: a node is either functioning or it completely stops.
  • VR works in an asynchronous network like the internet where nothing can be concluded about a message that doesn’t arrive. It may be lost, delivered out of order, or delivered many times.

Replica Groups

VR ensures reliability and availability when no more than a threshold of f replicas are faulty. It does this by using replica groups of size 2f + 1; this is the minimal number of replicas in an asynchronous network under the crash failure model.

We can provide a simple proof for the above statement: in a system with f crashed nodes, we need at least the majority of f+1 nodes that can mutually agree to keep the system functioning.
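The group size can be sanity-checked mechanically. With 2f + 1 replicas, any two sets of f + 1 replicas together contain 2f + 2 members, so they must overlap in at least one replica. A brute-force check (illustrative, with f = 2):

```python
from itertools import combinations

f = 2
replicas = set(range(2 * f + 1))  # 5 replicas, tolerating up to 2 failures

# Every pair of quorums of size f + 1 shares at least one replica.
for q1 in combinations(replicas, f + 1):
    for q2 in combinations(replicas, f + 1):
        assert set(q1) & set(q2), "two quorums must intersect"

print("quorum intersection holds for f =", f)
```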

A group of f+1 replicas is often known as a quorum. The protocol needs the quorum intersection property to be true to work correctly. This property states that:

The quorum of replicas that processes a particular step of the protocol must have a non-empty intersection with the group of replicas available to handle the next step, since this way we can ensure that at each next step at least one participant knows what happened in the previous step.


VR architecture

The architecture of VR is as follows:

  1. The user code is run on client machines on top of a VR proxy.
  2. The proxy communicates with the replicas to carry out the operations requested by the client. It returns the computed results from the replicas back to the client.
  3. The VR code on the side of the replicas accepts client requests from the proxy, executes the protocol, and executes the request by making an up-call to the service code.
  4. The service code returns the result to the VR code which in turn sends a message to the client proxy that requested the operation.


The challenge for the replication protocol is to ensure that operations execute in the same order at all replicas in spite of concurrent requests from clients and in spite of failures.

For all the replicas to end up in the same state, it is important that the above condition is met.

VR deals with the replicas as follows:

Primary: Decides the order in which the operations will be executed

Secondary: Carries out the operations in the same order as selected by the primary

What if the primary fails?

  • VR allows different replicas to assume the role of primary over time, as failures occur.
  • The system moves through a series of views. In each view, one replica assumes the role of primary.
  • The other replicas watch the primary. If it appears to be faulty, then they carry out a view-change to select a new primary.

We consider the following three scenarios of the VR protocol:

  • Normal case processing of user requests
  • View changes to select a new primary
  • Recovery of a failed replica so that it can rejoin the group

VR protocol

State of VR at a replica

The state maintained by each replica is presented in the figure above. Some points to note:

  • The identity of the primary isn’t stored but computed using the view number and the configuration.
  • The replica with the smallest IP is replica 1 and so on.

The client side proxy also maintains some state:

  • It records the configuration.
  • It records the current view number to track the primary.
  • It has a client id and an incrementing client request number.

Normal Operation

  • Replicas participate in processing of client requests only when their status is normal.
  • Each message sent contains the sender’s view number. Replicas process only those requests which have a view number that matches what they know. If the sender replica is ahead, it drops the message. If it’s behind, it performs a state transfer.
Normal mode operation

The normal operation of VR can be broken down into the following steps:

  1. The client sends a REQUEST message to the primary asking it to perform some operation, passing it the client-id and the request number.
  2. The primary cross-checks the info present in the client table. If the request number is smaller than the one present in the table, it discards it. It re-sends the response if the request was the most recently executed one.
  3. The primary increases the op-number, appends the request to its log, and updates the client table with the new request number. It sends a PREPARE message to the replicas with the current view-number, the operation-number, the client’s message, and the commit-number (the operation number of the most recently committed operation).
  4. The replicas won’t accept a message with an op-number until they have all operations preceding it. They use state transfer to catch up if required. Then they add the operation to their log, update the client table, and send a PREPAREOK message to the primary. This message indicates that the operation, including all the preceding ones, has been prepared successfully.
  5. The primary waits for a response from f replicas before committing the operation. It increments the commit-number. After making sure all operations preceding the current one have been executed, it makes an up-call to the service code to execute the current operation. A REPLY message is sent to the client containing the view-number, request-number, and the result of the up-call.
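Steps 2 and 3 on the primary's side can be sketched as follows. The state names (`client_table`, `op_number`, `log`) mirror the paper's description, but the code itself is only an illustration, not the paper's implementation:

```python
class Primary:
    def __init__(self, view_number: int = 0):
        self.view_number = view_number
        self.op_number = 0
        self.commit_number = 0
        self.log = []
        self.client_table = {}  # client_id -> (request_number, cached reply)

    def on_request(self, client_id, request_number, operation):
        last = self.client_table.get(client_id)
        if last and request_number < last[0]:
            return None          # stale request: discard it
        if last and request_number == last[0]:
            return last[1]       # duplicate: re-send the cached reply
        # New request: advance op-number, append to log, update client table.
        self.op_number += 1
        self.log.append((self.op_number, client_id, request_number, operation))
        self.client_table[client_id] = (request_number, None)
        # Broadcast PREPARE to the backup replicas.
        return ("PREPARE", self.view_number, operation,
                self.op_number, self.commit_number)

p = Primary()
msg = p.on_request("client-1", 1, "put x 42")
assert msg[0] == "PREPARE" and msg[3] == 1  # op-number advanced to 1
```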

Usually the PREPARE message is used to inform the backup replicas of the committed operations. It can also do so by sending a COMMIT message.

To execute a request, a backup has to make sure that the operation is present in its log and that all the previous operations have been executed. Then it executes the said operation, increments its commit-number, and updates the client’s entry in the client-table. But it doesn’t send a reply to the client, as the primary has already done that.

If a client doesn’t receive a timely response to a request, it re-sends the request to all replicas. This way if the group has moved to a later view, its message will reach the new primary. Backups ignore client requests; only the primary processes them.

View change operation

Backups monitor the primary: they expect to hear from it regularly. Normally the primary is sending PREPARE messages, but if it is idle (due to no requests) it sends COMMIT messages instead. If a timeout expires without a communication from the primary, the replicas carry out a view change to switch to a new primary.

There is no leader election in this protocol. The primary is selected in a round robin fashion. Each member has a unique IP address. The next primary is the backup replica with the smallest IP that is functioning. Each member of the group is already aware of who is expected to be the next primary.
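Since the primary is computed rather than stored (as noted earlier, it follows from the view-number and the configuration), the computation might look like this sketch (the IP addresses are made up):

```python
# Configuration sorted by IP: index 0 is "replica 1" with the smallest IP.
configuration = sorted(["10.0.0.3", "10.0.0.1", "10.0.0.2"])

def primary_for(view_number: int) -> str:
    # Round robin: each view advances to the next replica in the list.
    return configuration[view_number % len(configuration)]

assert primary_for(0) == "10.0.0.1"  # first view: smallest IP
assert primary_for(1) == "10.0.0.2"  # after a view change: next replica
```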

Every executed operation at the replicas must survive the view change in the order specified when it was executed. The up-call is carried out at the primary only after it receives f PREPAREOK messages. Thus the operation has been recorded in the logs of at least f+1 replicas (the old primary and f replicas).

Therefore the view change protocol obtains information from the logs of at least f + 1 replicas. This is sufficient to ensure that all committed operations will be known, since each must be recorded in at least one of these logs; here we are relying on the quorum intersection property. Operations that had not committed might also survive, but this is not a problem: it is beneficial to have as many operations survive as possible.
  1. A replica that notices the need for a view change advances its view-number, sets its status to view-change, and sends a START-VIEW-CHANGE message. A replica identifies the need for a view change based on its own timer, or because it receives a START-VIEW-CHANGE or a DO-VIEW-CHANGE from others with a view-number higher than its own.
  2. When a replica receives f START-VIEW-CHANGE messages for its view-number, it sends a DO-VIEW-CHANGE to the node expected to be the primary. The messages contain the state of the replica: the log, most recent operation-number and commit-number, and the number of the last view in which its status was normal.
  3. The new primary waits to receive f+1 DO-VIEW-CHANGE messages from the replicas (including itself). Then it updates its state to the most recent based on the info from replicas (see paper for all rules). It sets its number as the view-number in the messages, and changes its status to normal. It informs all other replicas by sending a STARTVIEW message with the most recent state including the new log, commit-number and op-number.
  4. The primary can now accept client requests. It executes any committed operations and sends the replies to clients.
  5. When the replicas receive a STARTVIEW message, they update their state based on the message. They send PREPAREOK messages for all uncommitted operations present in their log after the update. They execute these operations to be in sync with the primary.

To make the view change operation more efficient, the paper describes the following approach:

The protocol described has a small number of steps, but big messages. We can make these messages smaller, but if we do, there is always a chance that more messages will be required. A reasonable way to get good behavior most of the time is for replicas to include a suffix of their log in their DO-VIEW-CHANGE messages. The amount sent can be small since the most likely case is that the new primary is up to date. Therefore sending the latest log entry, or perhaps the latest two entries, should be sufficient. Occasionally, this information won’t be enough; in this case the primary can ask for more information, and it might even need to first use application state to bring itself up to date.


When a replica recovers after a crash it cannot participate in request processing and view changes until it has a state at least as recent as when it failed. If it could participate sooner than this, the system could fail.

The replica should not “forget” anything it has already done. One way to ensure this is to persist the state on disk — but this will slow down the whole system. This isn’t necessary in VR because the state is persisted at other replicas. It can be obtained by using a recovery protocol provided that the replicas are failure independent.

When a node comes back up after a crash it sets its status to recovering and carries out the recovery protocol. While a replica’s status is recovering it does not participate in either the request processing protocol or the view change protocol.

The recovery protocol is as follows:

  1. The recovering replica sends a RECOVERY message to all other replicas with a nonce.
  2. Only if the replica’s status is normal does it reply to the recovering replica with a RECOVERY-RESPONSE message. This message contains its view number and the nonce it received. If it’s the primary, it also sends its log, op-number, and commit-number.
  3. When the replica has received f+1 RECOVERY-RESPONSE messages, including one from the primary, it updates its state and changes its status to normal.
The protocol uses the nonce to ensure that the recovering replica accepts only RECOVERY-RESPONSE messages that are for this recovery and not an earlier one.
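The nonce handling can be sketched like this (illustrative code, not from the paper): the recovering replica tags each recovery attempt with a fresh nonce and discards responses that carry any other value.

```python
import secrets

nonce = secrets.token_hex(8)  # fresh nonce for this recovery attempt

# Responses arriving at the recovering replica; the first one is a
# late arrival from an earlier, abandoned recovery attempt.
responses = [
    {"nonce": "stale-nonce", "view_number": 3},
    {"nonce": nonce, "view_number": 5},
]

# Only responses matching the current nonce are accepted.
valid = [r for r in responses if r["nonce"] == nonce]
assert len(valid) == 1 and valid[0]["view_number"] == 5
```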


Reconfiguration deals with epochs. The epoch represents the group of replicas processing client requests. If the threshold for failures, f, is adjusted, the system can either add or remove replicas and transition to a new epoch. It keeps track of epochs through the epoch-number.

Another status, namely transitioning, is used to signify that a system is moving between epochs.

The approach to handling reconfiguration is as follows. A reconfiguration is triggered by a special client request. This request is run through the normal case protocol by the old group. When the request commits, the system moves to a new epoch, in which responsibility for processing client requests shifts to the new group. However, the new group cannot process client requests until its replicas are up to date: the new replicas must know all operations that committed in the previous epoch. To get up to date they transfer state from the old replicas, which do not shut down until the state transfer is complete.

The VR sub protocols need to be modified to deal with epochs. A replica doesn’t accept messages from an older epoch compared to what it knows, such as those with an older epoch-number. It informs the sender about the new epoch.

During a view-change, the primary cannot accept client requests when the system is transitioning between epochs. It does this by checking if the topmost request in its log is a RECONFIGURATION request. A recovering replica in an older epoch is informed of the epoch if it is part of the new epoch or if it shuts down.

The issue that comes to mind is that the client requests can’t be served while the system is moving to a new epoch.

The old group stops accepting client requests the moment the primary of the old group receives the RECONFIGURATION request; the new group can start processing client requests only when at least f + 1 new replicas have completed state transfer.

This can be dealt with by “warming up” the nodes before reconfiguration happens. The nodes can be brought up-to-date using state transfer while the old group continues to reply to client requests. This reduces the delay caused during reconfiguration.

This paper has presented an improved version of Viewstamped Replication, a protocol used to build replicated systems that are able to tolerate crash failures. The protocol does not require any disk writes as client requests are processed or even during view changes, yet it allows nodes to recover from failures and rejoin the group.

The paper also presents a protocol to allow for reconfigurations that change the members of the replica group, and even the failure threshold. A reconfiguration technique is necessary for the protocol to be deployed in practice since the systems of interest are typically long lived.


Want to learn how Viewstamped Replication works? Read this summary. was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-04 17:16:36

This article presents a summary of the paper “Harvest, Yield, and Scalable Tolerant Systems” published by Eric Brewer & Armando Fox in 1999. All unattributed quotes are from this paper.

The paper deals with the trade-offs between consistency and availability (CAP) for large systems. It’s very easy to point to the CAP theorem and assert that no system can have both consistency and availability.

But, there is a catch. CAP has been misunderstood in a variety of ways. As Coda Hale explains in his excellent blog post “You Can’t Sacrifice Partition Tolerance”:

Of the CAP theorem’s Consistency, Availability, and Partition Tolerance, Partition Tolerance is mandatory in distributed systems. You cannot not choose it. Instead of CAP, you should think about your availability in terms of yield (percent of requests answered successfully) and harvest (percent of required data actually included in the responses) and which of these two your system will sacrifice when failures happen.

The paper focuses on increasing the availability of large-scale systems through fault tolerance, containment, and isolation:

We assume that clients make queries to servers, in which case there are at least two metrics for correct behavior: yield, which is the probability of completing a request, and harvest, which measures the fraction of the data reflected in the response, i.e. the completeness of the answer to the query.

The two metrics, harvest and yield can be summarized as follows:

  • Harvest: data in response/total data
    For example: If one of the nodes is down in a 100 node cluster, the harvest is 99% for the duration of the fault.
  • Yield: requests completed with success/total number of requests
    Note: Yield is different from uptime. Yield deals with the number of requests, not only the time the system wasn’t able to respond to requests.
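The two ratios can be written down as trivial helper functions (my own sketch; `yield_` is spelled with an underscore only because `yield` is a Python keyword):

```python
# Harvest and yield as simple ratios (illustrative helpers, not from the paper).

def harvest(data_in_response, total_data):
    """Fraction of the required data actually reflected in the response."""
    return data_in_response / total_data

def yield_(completed_requests, total_requests):
    """Fraction of requests answered successfully (distinct from uptime)."""
    return completed_requests / total_requests

# One node down in a 100-node cluster (data spread evenly): harvest is 99%.
print(harvest(99, 100))   # 0.99
print(yield_(950, 1000))  # 0.95
```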

The paper argues that there are certain systems which require perfect responses to queries every single time. Also, there are systems that can tolerate imperfect answers once in a while.

To increase the overall availability of our systems, we need to think carefully about the consistency and availability guarantees they need to provide.

Trading Harvest for Yield — Probabilistic Availability

Nearly all systems are probabilistic whether they realize it or not. In particular, any system that is 100% available under single faults is probabilistically available overall (since there is a non-zero probability of multiple failures)

The paper talks about understanding the probabilistic nature of availability. This helps in understanding and limiting the impact of faults by making decisions about what needs to be available and what kind of faults the system can deal with.

They outline the linear degradation of harvest in the case of multiple node faults: harvest is directly proportional to the number of nodes that are functioning correctly, so it decreases and increases linearly with that number.

Two strategies are suggested for increasing the yield:

  1. Random distribution of data on the nodes
    If one of the nodes goes down, the average-case and worst-case fault behavior doesn’t change. Yet if the distribution isn’t random, then depending on the type of data, the impact of a fault may vary.
    For example, if only one of the nodes stored information related to a user’s account balance goes down, the entire banking system will not be able to work.
  2. Replicating the most important data
    This reduces the impact in case one of the nodes containing a subset of high-priority data goes down.
    It also improves harvest.
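The effect of the second strategy can be simulated in a few lines. The node and shard layout below is entirely invented for illustration; it only shows how replicating one high-priority shard keeps harvest at 100% under a single-node fault.

```python
# Sketch: harvest under a single-node fault, with and without replicating
# high-priority data. The shard-to-node layout is invented for illustration.

def harvest_after_fault(shards, failed_node):
    """Fraction of shards still reachable when one node fails.

    shards maps a shard name to the list of nodes holding a copy of it."""
    alive = [name for name, nodes in shards.items()
             if any(n != failed_node for n in nodes)]
    return len(alive) / len(shards)

# Four shards on four nodes, no replication:
unreplicated = {"a": [0], "b": [1], "c": [2], "d": [3]}
# Same shards, but high-priority shard "a" is also replicated onto node 1:
replicated = {"a": [0, 1], "b": [1], "c": [2], "d": [3]}

print(harvest_after_fault(unreplicated, 0))  # 0.75 -- shard "a" is lost
print(harvest_after_fault(replicated, 0))    # 1.0  -- the replica keeps "a" alive
```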

Another notable observation made in the paper is that while it is possible to replicate all your data, doing so does not do much to improve your harvest/yield, yet it increases the cost of operation substantially. This is because the internet works on best-effort protocols, which can never guarantee 100% harvest/yield.

Application Decomposition and Orthogonal Mechanisms

The second strategy focuses on the benefits of orthogonal system design.

It starts out by stating that large systems are composed of subsystems, some of which cannot tolerate failures on their own, but which fail in a way that allows the entire system to continue functioning with some impact on utility.

The actual benefit is the ability to provision each subsystem’s state management separately, providing strong consistency or persistent state only for the subsystems that need it, not for the entire application. The savings can be significant if only a few small subsystems require the extra complexity.

The paper states that orthogonal components are completely independent of each other: they have no runtime interface to other components, except possibly a configuration interface. This allows each individual component to fail independently and minimizes its impact on the overall system.

Composition of orthogonal subsystems shifts the burden of checking for possibly harmful interactions from runtime to compile time, and deployment of orthogonal guard mechanisms improves robustness for the runtime interactions that do occur, by providing improved fault containment.

The goal of this paper was to motivate research in the field of designing fault-tolerant and highly available large-scale systems.

It also encourages us to think carefully about the consistency and availability guarantees an application needs to provide, as well as the trade-offs it is capable of making in terms of harvest against yield.


Harvest, Yield, and Scalable Tolerant Systems: A Summary was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-04 17:13:54

This post summarizes the Raft consensus algorithm presented in the paper In Search of an Understandable Consensus Algorithm by Diego Ongaro and John Ousterhout. All pull quotes are taken from that paper.



Raft is a distributed consensus algorithm. It was designed to be easily understood. It solves the problem of getting multiple servers to agree on a shared state even in the face of failures. The shared state is usually a data structure supported by a replicated log. We need the system to be fully operational as long as a majority of the servers are up.

Raft works by electing a leader in the cluster. The leader is responsible for accepting client requests and managing the replication of the log to other servers. The data flows only in one direction: from leader to other servers.

Raft decomposes consensus into three sub-problems:

  • Leader Election: A new leader needs to be elected in case of the failure of an existing one.
  • Log replication: The leader needs to keep the logs of all servers in sync with its own through replication.
  • Safety: If one of the servers has committed a log entry at a particular index, no other server can apply a different log entry for that index.
Raft ensures these properties are true at all times.


Each server exists in one of three states: leader, follower, or candidate.

State changes of servers
In normal operation there is exactly one leader and all of the other servers are followers. Followers are passive: they issue no requests on their own but simply respond to requests from leaders and candidates. The leader handles all client requests (if a client contacts a follower, the follower redirects it to the leader). The third state, candidate, is used to elect a new leader.

Raft divides time into terms of arbitrary length, each beginning with an election. If a candidate wins the election, it remains the leader for the rest of the term. If the vote is split, then that term ends without a leader.

The term number increases monotonically. Each server stores the current term number which is also exchanged in every communication.

.. if one server’s current term is smaller than the other’s, then it updates its current term to the larger value. If a candidate or leader discovers that its term is out of date, it immediately reverts to follower state. If a server receives a request with a stale term number, it rejects the request.

Raft makes use of two remote procedure calls (RPCs) to carry out its basic operation.

  • RequestVotes is used by candidates during elections
  • AppendEntries is used by leaders for replicating log entries and also as a heartbeat (a signal to check if a server is up or not — it doesn’t contain any log entries)

Leader election

The leader periodically sends a heartbeat to its followers to maintain authority. A leader election is triggered when a follower times out after waiting for a heartbeat from the leader. This follower transitions to the candidate state and increments its term number. After voting for itself, it issues RequestVotes RPC in parallel to others in the cluster. Three outcomes are possible:

  1. The candidate receives votes from the majority of the servers and becomes the leader. It then sends a heartbeat message to others in the cluster to establish authority.
  2. If other candidates receive an AppendEntries RPC while waiting for votes, they check the term number. If the leader’s term is at least as large as their own, they accept the server as the leader and return to follower state. If the term number is smaller, they reject the RPC and remain candidates.
  3. The candidate neither loses nor wins. If more than one server becomes a candidate at the same time, the vote can be split with no clear majority. In this case a new election begins after one of the candidates times out.
Raft uses randomized election timeouts to ensure that split votes are rare and that they are resolved quickly. To prevent split votes in the first place, election timeouts are chosen randomly from a fixed interval (e.g., 150–300ms). This spreads out the servers so that in most cases only a single server will time out; it wins the election and sends heartbeats before any other servers time out. The same mechanism is used to handle split votes. Each candidate restarts its randomized election timeout at the start of an election, and it waits for that timeout to elapse before starting the next election; this reduces the likelihood of another split vote in the new election.
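The timeout mechanism above can be sketched in a few lines. This is an illustrative fragment only; the actual election machinery (vote counting, RPCs) is elided, and the function names are my own.

```python
import random

# Sketch of randomized election timeouts. The 150-300ms interval is the
# example given in the paper; everything else here is an invented sketch.

ELECTION_TIMEOUT_MS = (150, 300)

def new_election_timeout():
    """Pick a fresh random timeout; restarted at the start of each election."""
    return random.uniform(*ELECTION_TIMEOUT_MS)

def on_timeout(state, current_term):
    """A follower that hears no heartbeat becomes a candidate in a new term."""
    if state == "follower":
        return "candidate", current_term + 1
    return state, current_term

print(on_timeout("follower", 7))  # ('candidate', 8)
```

Because each server draws its own timeout from the interval, in most cases one server times out well before the others, wins the election, and suppresses further timeouts with its heartbeats.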

Log Replication:

The client requests are assumed to be write-only for now. Each request consists of a command to be executed by the replicated state machines of all the servers. When a leader gets a client request, it adds it to its own log as a new entry. Each entry in a log:

  • Contains the client specified command
  • Has an index to identify the position of entry in the log (the index starts from 1)
  • Has a term number to logically identify when the entry was written
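The entry structure above can be written down as a tiny sketch (the field names are my own, not the paper's):

```python
from dataclasses import dataclass

# A log entry as described above; field names are my own shorthand.

@dataclass
class LogEntry:
    index: int    # position of the entry in the log, starting from 1
    term: int     # term in which the entry was created
    command: str  # the client-specified command

# A two-entry log, both entries written during term 1:
log = [LogEntry(1, 1, "set x=1"), LogEntry(2, 1, "set y=2")]
```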

It needs to replicate the entry to all the follower nodes in order to keep the logs consistent. The leader issues AppendEntries RPCs to all other servers in parallel. The leader retries this until all followers safely replicate the new entry.

When the entry is replicated to a majority of servers by the leader that created it, it is considered committed. All the previous entries, including those created by earlier leaders, are also considered committed. The leader executes the entry once it is committed and returns the result to the client.

The leader maintains the highest index it knows to be committed in its log and sends it out with the AppendEntries RPCs to its followers. Once the followers find out that an entry has been committed, they apply it to their state machines in order.

Raft maintains the following properties, which together constitute the Log Matching Property:
• If two entries in different logs have the same index and term, then they store the same command.
• If two entries in different logs have the same index and term, then the logs are identical in all preceding entries.

When sending an AppendEntries RPC, the leader includes the term number and index of the entry that immediately precedes the new entry. If the follower cannot find a match for this entry in its own log, it rejects the request to append the new entry.

This consistency check lets the leader conclude that whenever AppendEntries returns successfully from a follower, their logs are identical up through the index included in the RPC.
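The follower's side of this check can be sketched as follows. The entry layout (a list of `(index, term)` pairs) and the function signature are my own simplifications, not Raft's actual RPC format.

```python
# Follower-side sketch of the AppendEntries consistency check: the RPC carries
# the index and term of the entry preceding the new ones, and the follower
# rejects the append if its own log has no matching entry at that position.

def append_entries(log, prev_index, prev_term, new_entries):
    """log is a list of (index, term) pairs; returns (success, new_log)."""
    if prev_index > 0 and (prev_index, prev_term) not in log:
        return False, log  # consistency check failed: reject the append
    # Drop anything after prev_index (conflicting entries) and append the new.
    kept = [e for e in log if e[0] <= prev_index]
    return True, kept + new_entries

follower_log = [(1, 1), (2, 1), (3, 2)]
ok, follower_log = append_entries(follower_log, 3, 2, [(4, 3)])
print(ok, follower_log)  # True [(1, 1), (2, 1), (3, 2), (4, 3)]
ok, _ = append_entries(follower_log, 5, 3, [(6, 3)])
print(ok)  # False -- no entry (5, 3) in the follower's log
```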

But the logs of leaders and followers may become inconsistent in the face of leader crashes.

In Raft, the leader handles inconsistencies by forcing the followers’ logs to duplicate its own. This means that conflicting entries in follower logs will be overwritten with entries from the leader’s log.

The leader tries to find the last index where its log matches that of the follower, deletes extra entries if any, and adds the new ones.

The leader maintains a nextIndex for each follower, which is the index of the next log entry the leader will send to that follower. When a leader first comes to power, it initializes all nextIndex values to the index just after the last one in its log.

Whenever AppendEntries returns with a failure for a follower, the leader decrements that follower’s nextIndex and issues another AppendEntries RPC. Eventually, nextIndex will reach a value where the two logs converge. AppendEntries will succeed when this happens, and the follower can then remove extraneous entries (if any) and add new ones from the leader’s log (if any). Hence, a successful AppendEntries from a follower guarantees that the follower’s log is consistent with the leader’s up to that point.

With this mechanism, a leader does not need to take any special actions to restore log consistency when it comes to power. It just begins normal operation, and the logs automatically converge in response to failures of the Append-Entries consistency check. A leader never overwrites or deletes entries in its own log.
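The backtracking loop can be condensed into a sketch. As before, logs are simplified to lists of `(index, term)` pairs, and the whole RPC exchange is collapsed into one local function; this is illustrative, not the real machinery.

```python
# Sketch of the leader backing nextIndex up until the follower's log matches,
# then overwriting the follower's conflicting tail with its own entries.

def sync_follower(leader_log, follower_log):
    """Find the last index where the logs agree, then copy the leader's tail."""
    next_index = len(leader_log) + 1  # start just past the leader's last entry
    while True:
        prev = next_index - 1
        # The consistency check: does the follower hold the leader's entry
        # at position prev? (prev == 0 means the empty prefix, which matches.)
        if prev == 0 or (prev <= len(follower_log)
                         and follower_log[prev - 1] == leader_log[prev - 1]):
            return follower_log[:prev] + leader_log[prev:]
        next_index -= 1  # AppendEntries failed: decrement and retry

leader = [(1, 1), (2, 1), (3, 2), (4, 3)]
follower = [(1, 1), (2, 1), (3, 2), (4, 2), (5, 2)]  # diverged at index 4
print(sync_follower(leader, follower))
# [(1, 1), (2, 1), (3, 2), (4, 3)]
```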


Raft makes sure that the leader for a term has committed entries from all previous terms in its log. This is needed to ensure that all logs are consistent and the state machines execute the same set of commands.

During a leader election, the RequestVote RPC includes information about the candidate’s log. If the voter finds that its own log is more up-to-date than the candidate’s, it doesn’t vote for it.

Raft determines which of two logs is more up-to-date by comparing the index and term of the last entries in the logs. If the logs have last entries with different terms, then the log with the later term is more up-to-date. If the logs end with the same term, then whichever log is longer is more up-to-date.
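The comparison rule reads directly as code, again on simplified `(index, term)` logs (my own representation):

```python
# The up-to-date comparison from the paragraph above: later last term wins;
# on a tie, the longer log wins.

def at_least_as_up_to_date(a, b):
    """True if log a is at least as up-to-date as log b."""
    last_a = a[-1] if a else (0, 0)
    last_b = b[-1] if b else (0, 0)
    if last_a[1] != last_b[1]:
        return last_a[1] > last_b[1]   # compare terms of the last entries
    return len(a) >= len(b)            # same term: compare log lengths

print(at_least_as_up_to_date([(1, 1), (2, 2)], [(1, 1)]))  # True
print(at_least_as_up_to_date([(1, 1)], [(1, 1), (2, 1)]))  # False
```

A voter would grant its vote only when the candidate's log passes this check against its own.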

Cluster membership:

For the configuration change mechanism to be safe, there must be no point during the transition where it is possible for two leaders to be elected for the same term. Unfortunately, any approach where servers switch directly from the old configuration to the new configuration is unsafe.

Raft uses a two-phase approach for altering cluster membership. First, it switches to an intermediate configuration called joint consensus. Then, once that is committed, it switches over to the new configuration.

The joint consensus allows individual servers to transition between configurations at different times without compromising safety. Furthermore, joint consensus allows the cluster to continue servicing client requests throughout the configuration change.

Joint consensus combines the new and old configurations as follows:

  • Log entries are replicated to all servers in both the configurations
  • Any server from old or new can become the leader
  • Agreement requires separate majorities from both old and new configurations
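The agreement rule can be sketched as a pair of majority checks. The server sets below are invented for illustration; server 3 belongs to both configurations.

```python
# Joint-consensus agreement sketch: a decision needs separate majorities
# in both the old and the new configuration.

def majority(votes, config):
    """True if the votes cover a strict majority of the given configuration."""
    return len(votes & config) > len(config) // 2

def joint_agreement(votes, old_config, new_config):
    return majority(votes, old_config) and majority(votes, new_config)

old = {1, 2, 3}
new = {3, 4, 5}
print(joint_agreement({1, 2, 3}, old, new))  # False -- no majority in new
print(joint_agreement({2, 3, 4}, old, new))  # True  -- majorities in both
```

This is why no single leader can be elected purely by old-configuration or purely by new-configuration servers during the transition.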

When a leader receives a configuration change message, it stores and replicates the entry for joint consensus C&lt;old, new&gt;. A server always uses the latest configuration in its log to make decisions, even if it isn’t committed yet. Once joint consensus is committed, only servers with C&lt;old, new&gt; in their logs can become leaders.

It is now safe for the leader to create a log entry describing C<new> and replicate it to the cluster. Again, this configuration will take effect on each server as soon as it is seen. When the new configuration has been committed under the rules of C<new>, the old configuration is irrelevant and servers not in the new configuration can be shut down.

A fantastic visualization of how Raft works can be found here.

More material such as talks, presentations, related papers and open-source implementations can be found here.

I have dug only into the details of the basic algorithm that makes up Raft and the safety guarantees it provides. The paper contains a lot more detail, and it is super approachable, as the primary goal of the authors was understandability. I definitely recommend you read it, even if you’ve never read any other paper before.


Understanding the Raft consensus algorithm: an academic article summary was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-04 17:12:44

That moment when you try to google haskell-yesod for more tutorials, stumble upon something called Kabbalah, think "wow, that's such good naming, the words fit really well together, with their Hebrew origin", and then realize it's not some piece of software, it's actual Kabbalah.

Asal Mirzaieva | code. sleep. eat. repeat | 2017-12-04 02:32:36

Well hello

So, I started to learn web programming with haskell and yesod. The Yesod book was too hard for me to grasp, and I couldn't find a plausible entry-level tutorial that wasn't written 5 years ago and would still compile. So I took an article by yannesposito and fixed it.

I saw some effort to fix the tutorial on the School of Haskell here, but its formatting gave me the impression that it is not maintained anymore.

Prerequisites: a basic understanding of haskell. If you lack it, I recommend reading this.

1. Work environment setup

I use stack instead of cabal. I, together with the yesod manual, really recommend you use it. Not to mention that with yesod you don't really have a choice: the only way to create a new project is to use stack ¯\_(ツ)_/¯.

1.1 So, let's get us some stack!

It's as easy as running either one of these two commands.
curl -sSL https://get.haskellstack.org/ | sh
wget -qO- https://get.haskellstack.org/ | sh

I strongly recommend using the latest version of stack instead of just apt-getting it. Ubuntu repos often contain older and buggier versions of our favorite software.

[optional] Check what templates are available
$ stack templates

1.2 Generate a template project.

You can generate a yesod project only by using stack; the init command has been removed from yesod. Use the yesod-sqlite template to store your blog entries (see the "Blog" chapter). Of course, if you don't intend to go that far with this tutorial, you can use yesod-simple. So, let's create a new project called "yolo" of type yesod-sqlite.
stack new yolo yesod-sqlite

1.3 Install yesod

To be able to run your project, you have to install yesod. This takes about 20 min.
stack install yesod-bin --install-ghc

1.4 Build and launch

Warning, first build will take a looong time
stack build && stack exec -- yesod devel

And check your new website at http://localhost:3000/
(3000 is the default port for yesod).
For more detailed reference about setting up yesod look here.

My versions of stuff:
stack: Version 1.5.1, Git revision 600c1f01435a10d127938709556c1682ecfd694e
yesod-bin version:
The Glorious Glasgow Haskell Compilation System, version 8.0.2

2. Git setup

You know it's easier to live with a version control system.
git init .
git add .
git commit -m 'Initial commit'

3. Echo

Goal: going to localhost:3000/echo/word should generate a page with the same word.

Don't add the handler with `yesod add-handler`, instead, do it manually.

Add this to config/routes, thus adding a new page to the website.
/echo/#String EchoR GET
#String is the type of the input after the slash, and haskell's strong types prevent us from getting SQL injections, for example.
EchoR is the name of the GET request handler, GET is the type of supported requests.

And this is the handler, add it to src/Handler/Home.hs.
getEchoR :: String -> Handler Html
getEchoR theText = do
    defaultLayout $ do
        setTitle "My brilliant echo page!"
        $(widgetFile "echo")

This tiny piece of code accomplishes a very simple task:
  • theText is the argument, that we passed through /echo/<theText is here>
  • for it we return a defaultLayout (that is specified in templates/defaultLayout.hamlet and is just a standard blank html page)
  • set page's title "My brilliant echo page!"
  • set main widget according to templates/echo.hamlet
 Also, remember that RepHtml is deprecated.

So, let's add this echo.hamlet to the <projectroot>/templates! As you can see it's just a header with the text that we passed after slash of echo/<word here>.
<h1> #{theText}

Now run and check localhost:3000/ :)

If you're getting an error like this
 Illegal view pattern:  fromPathPiece -> Just dyn_apb9
Use ViewPatterns to enable view patterns
 Illegal view pattern:  fromPathPiece -> Just dyn_aFon 

then just open your package.yaml file, which stack has automatically created for you, and add the following line just after the `dependencies:` section:
default-extensions: ViewPatterns

Else if you're getting something like this
yesod: devel port unavailable
CallStack (from HasCallStack):
  error, called at ./Devel.hs:270:44 in main:Devel
then most probably you have another instance of the site running, and thus port 3000 is unavailable.

If you see this warning
Warning: Instead of 'ghc-options: -XViewPatterns -XViewPatterns' use
'extensions: ViewPatterns ViewPatterns'

It's okay; so far stack does not support the 'extensions' section in the .cabal file. Catch up with this topic in this thread.

If you see this warning
Foundation.hs:150:5: warning: [-Wincomplete-patterns]
    Pattern match(es) are non-exhaustive
    In an equation for ‘isAuthorized’:
        Patterns not matched: (EchoR _) _

That means that you need to add this line to Foundation.hs
isAuthorized (EchoR _) _ = return Authorized
All it does is grant everybody permission to access localhost/echo.

4. Mirror

Goal: create a page /mirror with an input field, which will post the actual word and its palindrome glued together, as in book -> bookkoob or bo -> boob.

Add the following to config/routes to create a new route (i. e. a page in our case).
/mirror MirrorR GET POST

Now we just need to add a handler to src/Handler/Mirror.hs
import Import
import qualified Data.Text as T

getMirrorR :: Handler Html
getMirrorR = do
    defaultLayout $ do
        setTitle "You kek"
        $(widgetFile "mirror")

postMirrorR :: Handler Html
postMirrorR = do
    postedText <- runInputPost $ ireq textField "content"
    defaultLayout $(widgetFile "posted")

 Don't be overwhelmed! It's quite easy to understand.

And add the handler import to src/Application.hs, you will see a section, where all other handlers are imported

import Handler.Mirror

Mirror.hs mentions two widget files, 'mirror' and 'posted'. Here are their contents.

templates/mirror.hamlet:

<h1> Enter your text
<form method=post action=@{MirrorR}>
    <input type=text name=content>
    <input type=submit>

templates/posted.hamlet:

<h1>You've just posted
<p>#{postedText}#{T.reverse postedText}
<p><a href=@{MirrorR}>Get back

There is no need to add anything to the .cabal or .yaml files, because stack magically deduces everything on its own :)

Don't forget to add the new route to isAuthorized like in the previous example!

Now build, launch, and check out your localhost:3000; you should see something similar to my pics.

stack build && stack exec -- yesod devel

And after you entered some text in the form, you should get something like this

5. Blog

Again, add Handler.Article and Handler.Blog to the Application.hs imports.
Here are the contents of Blog.hs:

{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies #-}

module Handler.Blog
    ( getBlogR
    , postBlogR
    , YesodNic
    , nicHtmlField
    ) where

import Import
import Data.Monoid

import Yesod.Form.Nic (YesodNic, nicHtmlField)
instance YesodNic App

entryForm :: Form Article
entryForm = renderDivs $ Article
    <$> areq textField "Title" Nothing
    <*> areq nicHtmlField "Content" Nothing

getBlogR :: Handler Html
getBlogR = do
    articles <- runDB $ selectList [] [Desc ArticleTitle]
    (articleWidget, enctype) <- generateFormPost entryForm
    defaultLayout $ do
        setTitle "kek"
        $(widgetFile "articles")

postBlogR :: Handler Html
postBlogR = do
    ((res, articleWidget), enctype) <- runFormPost entryForm
    case res of
        FormSuccess article -> do
            articleId <- runDB $ insert article
            setMessage $ toHtml $ articleTitle article Import.<> " created"
            redirect $ ArticleR articleId
        _ -> defaultLayout $ do
            setTitle "You loose, sucker!"
            $(widgetFile "articleAddError")

Article.hs contents

{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies #-}

module Handler.Article where

import Import

getArticleR :: ArticleId -> Handler Html
getArticleR articleId = do
    article <- runDB $ get404 articleId
    defaultLayout $ do
        setTitle $ toHtml $ articleTitle article
        $(widgetFile "article")

Add this to config/models

Article
    title Text
    content Html

And this to config/routes

/blog               BlogR       GET POST
/blog/#ArticleId ArticleR GET

And add these lines to src/Foundation.hs. This is a hack, but otherwise you cannot view the contents unauthorized, right? :) Drawback: all users on the internets will be able to see your posts.

isAuthorized BlogR _ = return Authorized
isAuthorized (ArticleR _) _ = return Authorized

All done! Here's what you will see:

I’ve been trying to wrap my mind around the fonts section of the “Thinking With Type” book.

I started by hunting for family trees for common font families. Failing to find those — likely because there’s an astonishing number of fonts out there — I started doodling around trying to get something on paper for myself.

Without further ado, here’s my best approximation of the information in the section I’ve read, with some space available for further exploration. Mostly, I think I’m baffled by how one selects a font or font family, in part due to the sheer number of fonts out there, and in part because some require money. I’ll start out playing with google fonts, because those seem to be specific for the web, and free. Open Sans seems to be a decent default, and Patternfly uses it.

Font Categories

“Thinking With Type” starts out by explaining the history behind fonts, and structures things by that history.

Humanist (or Roman) fonts include what were originally the gothic and italic typefaces — these came from hand-written, script and body-based styles. These relied upon calligraphy and the movements of the hand.

Enlightenment fonts were based on engraving techniques and lead type, and allowed for more flexibility in what was possible. This included both Transitional and Modern typefaces, which began the process of separating and modifying pieces of a letterform. Transitional started with Baskerville’s sharper serifs and more vertical axes. Modern went to an extreme with this, with Bodoni and Didot’s thin, straight serifs, vertical axes, and sharp contrast between thick and thin lines.

Abstract fonts went even further in the direction of exaggerating the pieces of a letterform, in part because of the additional options available with industrialization and wood-cut type.

Reform and Revolution were a reaction to the abstract period, in which font makers returned to their more humanist roots.

Computer-optimized fonts were created to handle the low resolution available with CRT screens and low resolution printers.

With the advent of purely digital fonts, creators of fonts started playing with imperfect type. Others created font workhorses using flexible palettes.

This is probably better named Font History!

Humanist Fonts

Humanist fonts were based on handwriting samples.

Gothic fonts were based on German writing, such as that of Gutenberg:


Whereas the Italic fonts were based on Italian cursive writing:


These were combined by Nicolas Jenson in 1465 into the first Roman typeface, from which many typefaces sprung:

I don’t have much about the ones after Jenson.

Enlightenment Fonts

With the Enlightenment period came experimentation.

From the committee-designed romain du roi typeface, which was entirely created on a grid:


To the high contrast between the thick and thin elements from Baskerville, no longer strongly attached to calligraphy (the point at which you enter the Transitional period for fonts):


The Modern fonts from Bodoni and Didot further increased the contrast between thick and thin elements beyond Baskerville’s font.

https://en.wikipedia.org/wiki/Bodoni and https://en.wikipedia.org/wiki/Didot_(typeface)

Abstraction Fonts

In the abstraction period, the so-called Egyptian or Fat Face (now known as slab serif) fonts came about. These were the first attempts at making type serve a function other than long lines of book text, namely advertising; such fonts are otherwise known as display typefaces.

These took the contrasts of the Enlightenment period to extremes, making fonts whose thin lines were barely there and whose thick lines were enormous.

Egyptian, or Slab Serif, from http://ilovetypography.com/2008/06/20/a-brief-history-of-type-part-5/
Fat Face, from http://ilovetypography.com/2008/06/20/a-brief-history-of-type-part-5/

Reform and Revolution Fonts

Font makers in the reform period reacted to the excesses of the abstraction period by returning to their historic roots.

Johnston (1906) used more traditional letterform styles of the Humanist period, although without serifs:


The Revolution period, on the other hand, continued experimenting with what type could do.

The De Stijl movement in particular explored the idea of the alphabet (and other forms or art) as entirely comprised of perpendicular elements:

Doesburg (1917), https://zaidadi.wordpress.com/2011/03/09/de-stijl-in-general/
Forgive the bright pink aspect of this. It’s my lighting!

Computer-Optimized Fonts

The low resolution of early monitors and printers meant that fonts needed to be composed entirely of straight lines to display well.

Wim Crouwel created the New Alphabet (1967) font type for CRT monitors:


Zuzana Licko and Rudy VanderLans created the type foundry Emigre, which includes Licko’s Lo-Res (1985) font:


Matthew Carter created the first web fonts in 1996 for Microsoft, Verdana (sans serif) and Georgia (serif):

From Wikipedia, https://en.wikipedia.org/wiki/Verdana and https://en.wikipedia.org/wiki/Georgia_(typeface)

Imperfect Type

With the freedom from the physicality of the medium (such as lead type or wood type) that came with computers, some font designers began experimenting with imperfect types.

Deck made Template Gothic (1990), which looks as if it had been stencilled:


Makela made the Dead History (1990) font using vector manipulation of the existing fonts Centennial and VAG Rounded:


And Rossum and Blokland made Beowulf (1990) by changing the programming of PostScript fonts to randomize the locations of points in letters:


Workhorse Fonts

Also during the 1990s, some folks were working on fonts that were uncomplicated and functional. Licko’s Eaves pair, with their small x-heights, are good for use in larger sizes:

https://www.emigre.com/Fonts/Mrs-Eaves (1990) and https://www.emigre.com/Fonts/Mr-Eaves-Sans-and-Modern (2009)

Smeijers’ Quadraat (1992) started as a small serif font, with various weights and alternatives (sans and sans condensed) added to the family over time:


Majoor’s Scala (1990) is another simple, yet complete, typeface family:


Finally, at the turn of the century, Frere-Jones created the Gotham (2000) typeface. Among other places, it featured prominently in Barack Obama’s 2008 presidential election campaign.



In an effort to better remember various suggestions and terms used throughout the Font portion of Thinking With Fonts, I created a terminology sheet.

I’m most likely to forget that there are multiple different characters which can be understood to be quotes, and how to use them. I’m also likely to forget that larger x-heights are easier to read at small sizes.

Common Fonts?

I started making a list of common fonts, but quickly realized that this was a complex and difficult task. I’m including what I made for completeness, but it seems like a superfamily (like Open Sans) will be fine for most of my work.

What’s next for me in Typography and Visual Design?

The book discusses Text next, after an exercise in creating modular letterforms on a grid. I’m looking forward to it, but I do need a break from it for now.

I’ve started trying to mimic existing visual designs (from the collectui.com website), as many folks have suggested it’d be the best way to get a feel for what works and how to do it. I’ll likely talk more about that here, once I’m further along in that process.


Thinking With Type: Fonts was originally published in Prototypr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-12-03 04:59:06

The Linux foundation holds a lot of events where companies can reach the maintainers and developers across all the important open source software projects in enterprise, networking, embedded, IoT and cloud infrastructure.

Open Source Summit is the premier open source technical conference in North America, gathering 2,000+ developers, operators and community leadership professionals to collaborate, share information and learn about the latest in open technologies, including Linux, containers, cloud computing and more. This time they also organised a Diversity Summit.

I was really lucky and excited to be a part of OSS NA 2017. In May I had sent a proposal to deliver a talk based on my Outreachy internship with the Linux kernel from December 2016 to March 2017, and it was accepted. The best part was connecting with my mentor Matthew Wilcox as well as a fellow intern and my co-speaker for the talk, Sandhya Bankar. We all missed having Rik Van Riel, also my mentor for the Outreachy internship, and Julia Lawall, who manages and mentors the Outreachy internships and has been really encouraging throughout.

With my mentor Matthew Wilcox

With Greg Kroah-Hartman
Coming to the conference, I had never experienced such a diverse collection of talks, people and organisations as in OSS NA 17. Even the backdrop of Los Angeles, its amazing food and places and wonderful networking opportunities truly made the experience enriching and magical.

With Sandhya Bankar at the partner reception at The Rooftop at The Standard.
At the evening event at Paramount Studios
At the Walt Disney Concert Hall with Jaminy Prabha
I couldn't follow the talks from the CloudOpen and ContainerCon tracks much, but going around the sponsor showcase, I got a lot of background on what these very popular and upcoming technologies are about and was inspired to explore them when I go back. I had some very interesting discussions there. I attended many LinuxCon and Diversity track talks as well as keynotes, which I found most interesting and took a lot home from them.

At the sponsor showcase with this amazing web representation of all the technologies there.
The opening keynote by Jim Zemlin really set up the excitement about open source and the next 4 days.

Next I really liked knowing about the CHAOSS Project and found it relevant to my research area at college. CHAOSS is a new Linux Foundation project aimed at producing integrated, open source software for analyzing software development, together with defining implementation-agnostic metrics for measuring community activity, contributions, and health.

Another highlight of the first day was the Women in Open Source Lunch sponsored by Intel. The kind of positivity, support, ideas and inspiration in that room was really one of its kind.

Being one of the few students at the summit, I found the talk "Increasing student participation in open source" very interesting. Similarly, the career fair was fun: getting to talk to engineers at popular companies and hearing about the job profiles they are seeking.

All the keynotes over the rest of the conference were really interesting, but the afternoon talks sometimes required too much attention to really get them and I was jet-lagged.

I particularly enjoyed the keynote by Tanmay Bakshi as well as meeting him and his family later. He talked about how he’s using cognitive and cloud computing to change the world, through his open-source initiatives, for instance, “The Cognitive Story”, meant to augment and amplify human capabilities; and “AskTanmay”, the world’s first Web-Based NLQA System, built using IBM Watson’s Cognitive Capabilities. It was inspiring to learn how passionate he is about technology at such a young age.

A highlight from the third day was my mentor's talk on Replacing the Radix Tree which was a follow-up thread to my outreachy internship and inspired me to contribute to the new XArray API and test suite.

I am grateful to everyone in the Linux community and all those who support the Outreachy internships, especially Marina Zhurakhinskaya and Sarah Sharp. I'm also really grateful to The Linux Foundation and the Outreachy programme for sponsoring my trip.

Rehas Mehar Kaur Sachdeva | Let's Share! | 2017-12-03 01:38:30

UX folks may be in the best position to identify ethical issues in their companies. Should it be their responsibility?

In the previous section, I described the state of UX practice at technology companies, and the need for high-level buy-in for successful UX integration.

There is a concerning — and increasingly evident — lack of ethical consideration in the processes of most software companies. In this section, I will describe some of the ways in which this has recently become more apparent.

Digital Ethics

The software in our lives is not generally designed with our health and well-being in mind. This fact is becoming clear as Facebook, Google, and Twitter are in the spotlight over Russia’s interference with our elections and increasing political divides. Twitter has also typically been unwilling to do much about threats or hate speech.

There is too much focus on engagement and creating addiction in users, and not enough on how things might go bad and appropriate ways to handle that.

Internet of Things (IoT)

There’s a proliferation of products in the Internet of Things (IoT) space, many of which are completely insecure and thus easily turned into a botnet, mined for the private information on them, or hacked for use as information-gathering devices.

Effects on Kids

Some IoT devices are specifically targeted at kids, but few or no companies have put any effort into identifying how they will affect the development of the children who use them. Concerned researchers at the MIT Media Lab have begun to study the effects of intelligent devices and toys on kids, but this won’t stop the continued development of these devices.

Similarly, it’s unclear how the use of devices that were originally aimed at adults — such as Alexa — will affect the kids in those houses. On one hand, it doesn’t involve screen time, which is no longer completely contraindicated for kids under two but is still wise to limit. On the other hand, we have no idea how those devices will answer questions they were not programmed to handle. Additionally, these devices do not encourage kids to use good manners — one of the important lubricants for the fabric of society. It’s hard enough to teach kids manners without having that teaching undermined by an intelligent device!

Finally, consider how machine learning can result in some truly horrific scenarios (content warning: the linked essay describes disturbing things and links to disturbing graphic and video content).

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatize, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level.
James Bridle · Writer and Artist

Willful ignorance: Twitter and Equifax

Similarly, we’ve seen the results of a focus on metrics and money over security and sanity. Twitter not only knew that there were spam and fake accounts from Russia and Ukraine in 2015, but refused to remove them because

“They were more concerned with growth numbers than fake and compromised accounts,” Miley told Bloomberg.

Equifax stores highly sensitive information about people in the US, and left security vulnerabilities open for months after being told about them. As a result, they had multiple security breaches, basically screwing over anyone whose data was stolen.

Yeah, no. You knew you had vulnerabilities!

Thoughtlessness: Google, Facebook, and Big Data

Even without willful ignorance, thoughtlessness alone can easily be enough to put individuals, communities, and societies at risk.

Considering the breadth of data that many companies are collecting on those who use their products, there is a worrying lack of thought given to the invasiveness of this practice and to how to safeguard the data in question. These companies often make poor choices in what information to keep, how to secure and anonymize the information, and who has access to that information.

Some might say that conversational devices like Alexa and Google Home are worth the privacy risks inherent in an always-on listening device. Others might suggest that it’s already too late, given that Siri and Google Now have been listening to us and our friends through our phones for a long time now.

However, regardless of one’s thoughts on the timing of the concerns, the fact remains that tech giants have access to an amazing amount of information about us. This information is collected through our phones, through our searches and purchasing patterns, and sometimes through devices like the Amazon Echo and the Google Home Mini.

Some companies are better than others (consider Apple’s refusal to break their encryption for the FBI), but it can be quite difficult to identify which companies are making the best choices for their customers’ privacy, safety, and sanity.

Machine Learning

Take machine learning (also known as AI), and the fact that companies are more interested in selling ads than considering the effects their software has on their customers:

It’s not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. […] But it’s not the intent or the statements people in technology make that matter, it’s the structures and business models they’re building. […] Either Facebook is a giant con of half a trillion dollars and ads don’t work on the site, it doesn’t work as a persuasion architecture, or its power of influence is of great concern. It’s either one or the other. It’s similar for Google, too.
Zeynep Tufekci · Techno-sociologist

One of the major problems with machine learning is that we have _no idea_ precisely what associations any particular algorithm has learned. The programmers of those algorithms just say whether the output those algorithms provide is good enough, and often ‘good enough’ doesn’t take into account the effects on individuals, communities, and society.

I hope you begin to understand why ethics is a big concern among the UX folks I follow and converse with. At the moment, the ethics of digital products is a big free-for-all. Maybe there was a time when ethics wasn’t as relevant, and code really was just code. Now is not that time.

In part 3, I’ll discuss the positioning of UX people to more easily notice these issues, and the challenges involved in raising concerns about ethics and ethical responsibility.

Thanks to Alex Feinman, Máirín Duffy, and Emily Lawrence for their feedback on this thread!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-12-01 14:34:13

User Experience (UX) folks may be in the best position to identify ethical issues in their companies. Should it be their responsibility?

This will be a multi-part story.

In this first part, I’m going to explain some of the problems inherent in the implementation of UX practices at technology companies today, to provide the background necessary to make my point.

You can also skip ahead to part two, in which I talk about ethics in the tech industry today.

First: Why do Businesses want UX?

Poor user experience = burning your money

Businesses are starting to realize that they need to incorporate UX to retain and increase their customer base. Discussions with Boston-area user experience folks suggest that companies have figured out that they needed to incorporate UX years ago, and that they’re behind.

Many of those businesses are so new to UX that they don’t understand what it means. Part of the reason for this is that ‘UX’ is an umbrella term, typically including:

  • user research
  • information architecture (or IA)
  • interaction design (or IxD)
  • content specialists
  • visual design

In addition, some UX teams include front-end developers, as it can otherwise be difficult to be certain that the developers implementing the interface have a basic understanding of user experience.

User Experience is complicated!

When looking for UX employees, some businesses end up throwing the kitchen sink into their job descriptions, or look for the extremely rare UX unicorn — someone skilled at all parts of UX as well as development. This unfortunately makes it nearly impossible that they will get what they need, or even that they will get any decent candidates at all.

Often, people expect the UX unicorn to be able to do all aspects of UX and write code. A more reasonable expectation is to understand how coding works, even if you don’t do it yourself.

Other employers prioritize visual or graphic design skills over the skills necessary to understand users, because they have gotten the impression that ‘making it pretty’ will keep their customers from leaving. Often the problem is at a much deeper level: the product in question was never designed with the user’s needs in mind.

Successful UX needs high-level buy-in

Unfortunately, bringing UX professionals into a company without buy-in at the top level nearly guarantees that they will fail. In addition to their regular UX work, they will also be stuck with the job of trying to sell UX to the rest of the company. Without support from higher-ups, it is nearly impossible for a single person to make the amount of change necessary.

Surveying local people, I learned that being the only UX person in a small company or startup is probably doable, if the company understands the value you bring. There are fewer people to convince, and usually fewer products to deal with.

However, being the only UX person in a big company will likely be an exercise in frustration and burnout. On top of the fact that you’re trying to do too many different things on your own, you’ve also got to try to keep the bigger picture in mind.

Some important long-term questions include:

  • “What are the right strategic directions to go in?”
  • “Are the things that you are creating potentially going to cause or enable harm?”

The second question brings us to the question of “who in high tech is thinking about the ethics of their creations?”. Unfortunately, too often, the answer is ‘no one’, which I will discuss in Part 2.

Thank you to Alex Feinman and Máirín Duffy for their feedback on this article!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-12-01 14:32:33

To conclude my internship with Mozilla, I did a video presentation on Air Mozilla. This is a high-level overview of the solution I designed to automate accessibility testing.

Below are the slides, followed by the transcript and video.

Automating Web Accessibility Testing

Hi, my name is Kimberly Sereduck. I recently completed an internship with Mozilla through the GNOME Outreachy program. As part of the test engineering team, the goal of my internship was to automate web accessibility testing.

Which, for most people, probably brings up two new questions in itself:
What is Automated Testing?
What is Web Accessibility?

What is automated testing?

In order to ensure the quality of software and websites, software engineers and test engineers write tests to check that the product works as it should. Automated tests often run on a set schedule, or when an update is released.

What is Web Accessibility?

Accessibility is a word we have probably all heard before, but you may or may not understand what it means. Even while working as a web developer, I wasn’t sure exactly what it was, or why it was important.

Imagine for a moment that, when browsing the web, your knowledge of what a page contains was based exclusively on dialogue from a Screen Reader.

A screen reader is a type of software that will read aloud the contents of a page, including headers, links, alternative text for images, input elements, and more.

This technology allows users with limited vision to navigate the resources of the internet.

However, if a website is not designed with accessibility in mind, it can still be very difficult or even impossible for these types of users to understand what a page contains, or to interact with it.

This is just one example of a user who requires an accessible web. There are many different types of users, who utilize different types of assistive technologies.

This brings me to the topic of web accessibility testing.

Web accessibility testing is simply an analysis of how accessible a website is.

There are detailed standards of web accessibility called the Web Content Accessibility Guidelines, or WCAG for short.

An example of one of these rules is that all images must contain alternative text. This text would be read aloud by a screen reader.

If an image is used to help convey a point, providing an alt attribute allows non-sighted users to understand the context more fully.

In the example here, a graph is an image used to convey information, and not used just for display purposes.

A good alternative text would be to concisely describe what information the graph contains.
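The spirit of that rule is easy to sketch in code. This is a toy illustration using only the Python standard library, not aXe itself: it flags `<img>` tags that lack an alt attribute entirely (an empty alt is legitimately used to mark decorative images).

```python
# Toy illustration of the image-alt rule (not aXe): flag <img> tags
# that have no alt attribute at all. Filenames here are made up.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect the src of every <img> tag missing an alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.violations.append(attrs.get("src", "<unknown>"))

checker = AltTextChecker()
checker.feed('<img src="graph.png">'
             '<img src="sales.png" alt="Sales rose 40% from 2015 to 2017">')
print(checker.violations)  # only the image with no alt text is flagged
```

A real checker such as aXe applies dozens of rules like this one across the rendered page, not just the raw markup.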

Problem Domain

Accessibility testing for websites is, at this point, largely manual work. There are many tools that exist to generate accessibility reports for websites. These are mostly limited to browser-based tools.

Here is an example of one of the browser-based tools.

Most of these tools do not offer much in the way of customization or flexibility.

Most of them return a list or report of all accessibility violations found on a single page.

However, almost all websites are composed of multiple pages, and some may even have dozens of different types of pages.

Using one of these browser-based accessibility tests would require someone manually running the tests on each different type of page, reviewing the results, and creating an accessibility report for the site.

For most companies that address the accessibility of their websites, this is done on an irregular basis, and is not integrated into their automated testing workflow.

An example of what I mean by automated testing workflow:

If an update is made to the website download.mozilla.org, which results in users not being able to download Firefox, there are automated tests in place to catch these errors.

This enables test engineers to be notified right away, rather than the problem going unnoticed until someone happens to catch it.

This type of testing is called regression testing. Regression tests make sure that features that worked before an update still work after the update.

What Problem Does This Solve?

The problem that this project solves is to integrate regression testing for web accessibility into this automated workflow.

So, if a site is accessible, and an update is released that makes it less accessible, these tests would notify test engineers of the new accessibility violations.

To make this possible, I have written software that allows python programmers to make use of a tool called aXe in their python tests.

aXe is an API created by Deque Systems, a company that specializes in web accessibility.

aXe can be run against a web page and return a list of all violations of accessibility rules.

aXe also allows customization, such as including or excluding certain accessibility rules, or testing specific parts of a page.

The software I have written, called axe-selenium-python, maintains this ability to customize your accessibility tests.

The way this software works is that it is included in existing automated tests, adding accessibility checks for each page that the tests visit.

This puts accessibility on the same level as functionality, and accessibility reports will be generated every time a test suite is run, rather than manually and infrequently.

Creating a more inclusive web is a top priority to the people at Mozilla.

Mozilla has a dedicated accessibility team that does check their websites for accessibility.

However, we would like to give the test engineering team the ability to include accessibility checks in their automated testing workflow.

My work will enable Mozilla to test the accessibility of its websites on a more regular basis, and do it using much less time and resources.

Goals for the Future

Although the software I have written is functional and currently available for public use, there are many goals to improve it, making it simpler to use and more beneficial.


There are three main goals for improving the package I have written. The first is to devise a production-ready solution for testing individual accessibility rules, rather than running a single accessibility check for each page.

The reason for this is two-fold.

Without individual tests, there is a single PASS or FAIL for accessibility. If ANY rule is violated, this test fails. This also means that the results of the failing rules are shown in a single report.

However, if individual tests exist, there will be a pass or fail for each accessibility rule that is tested for, making the results more readable.
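The idea can be sketched in a few lines of Python. The `results` dict below is a simplified, hypothetical stand-in for an aXe result set, not the exact payload format:

```python
# Sketch: turn one aXe-style result set into per-rule outcomes,
# instead of a single PASS/FAIL verdict for the whole page.
# The `results` dict is a simplified, hypothetical payload.
RULES = ["image-alt", "label", "color-contrast"]

results = {
    "violations": [
        {"id": "image-alt", "nodes": [{"target": ["#logo"]}]},
        {"id": "color-contrast", "nodes": [{"target": ["p.footer"]}]},
    ]
}

def per_rule_outcomes(results, rules):
    """Map each rule of interest to PASS or FAIL."""
    violated = {v["id"] for v in results["violations"]}
    return {rule: "FAIL" if rule in violated else "PASS" for rule in rules}

outcomes = per_rule_outcomes(results, RULES)
print(outcomes)  # {'image-alt': 'FAIL', 'label': 'PASS', 'color-contrast': 'FAIL'}
```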

Test engineers would be able to quickly see the number of violations, and which rules were violated.

Having individual tests also allows the ability to mark tests as expected failures.

In the world of test engineering, when a failure is spotted, typically a bug or issue is filed, and the test will continue to fail until the bug is fixed. It is not uncommon for some issues to go unaddressed for weeks.

In most cases, you don’t want to keep getting notifications of a failing test after a bug has been filed.

So the tests are marked as expected failures, and instead of being notified when they fail, the test engineers will be notified when the test begins passing again.
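That expected-failure logic can be sketched in the same spirit. Again, this is a hypothetical illustration of the workflow, not the package's API; the rule names and the known-bug set are made up:

```python
# Sketch: rules with an open bug report become expected failures (XFAIL)
# instead of FAIL, and report XPASS once they start passing again.
KNOWN_BUGS = {"color-contrast"}  # hypothetical rule with a filed issue

def outcome(rule, violated_rules, known_bugs):
    if rule in violated_rules:
        return "XFAIL" if rule in known_bugs else "FAIL"
    return "XPASS" if rule in known_bugs else "PASS"

violated = {"image-alt"}  # color-contrast was fixed after the bug was filed
print(outcome("image-alt", violated, KNOWN_BUGS))       # FAIL: new, notify
print(outcome("color-contrast", violated, KNOWN_BUGS))  # XPASS: close the bug
print(outcome("label", violated, KNOWN_BUGS))           # PASS: quiet
```

Test runners like pytest provide this behaviour through expected-failure markers; the sketch above just shows the decision table.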

I want to enable this same functionality with my software.

While I have succeeded with a couple of different approaches, neither of them can be easily integrated into existing test suites.

I want my solution to be as hassle-free as possible, allowing users to easily add and customize these individual tests.

The third goal for improving this package is to generate more sophisticated accessibility reports. I want to allow users to view a single report for accessibility, one that is well designed and presents the information in a way that is easier to understand.


A long-term goal that I have as far as the company is concerned is to develop an Accessibility dashboard for all of Mozilla’s web assets.

This dashboard would use the data from the most recent accessibility tests, and give each Mozilla website a grade or rating for accessibility.

This dashboard would be included with others that are currently displayed in several of the Mozilla offices around the world.

Producing this dashboard would increase the visibility and awareness of Mozilla’s accessibility status, which, in turn, would have a significant impact on Mozilla’s goal of creating a more inclusive web.


Another goal I have for this project is to involve community contributors in Mozilla’s accessibility goals.

One idea for increasing community involvement is by adding a new feature to my axe-selenium-python package, or to write a new package altogether.

This feature or package would automatically file a bug when a new accessibility violation is found.

Most accessibility violations are fairly easy to fix, and require only a basic knowledge of HTML.

As such, they would make great First Bugs for people who have never contributed to Mozilla or an open-source project before.

This would help increase community contributions overall, and also to expedite the process of fixing these accessibility violations.

This feature could tremendously affect Mozilla’s inclusivity goals, and do so without adding a large workload to full-time employees.


More and more people every day have access to the internet.

Millions of these users have limited vision or limited physical ability, and rely on technologies other than a mouse and a screen to navigate the web.

The internet is a vast resource that opens up a world of knowledge that humans have never had access to before. It is becoming increasingly important in all aspects of life.

Knowledge, to me, is not a privilege, but a basic human right. As such, I believe that the resources of the internet should be made accessible to all types of users.

An accessible web also allows all people to participate more actively in society.

I am very fortunate to have had an opportunity to help create a more inclusive web. I hope to continue to do so, and make an even more significant impact in the world of accessibility and inclusivity.

If I seem terribly uncomfortable and out of breath, I was! I was almost 9 months pregnant when I recorded this presentation, so please, pity me.

The post Air Mozilla Presentation – Automating Web Accessibility Testing appeared first on Kimberly the Geek.

Kimberly Pennington | Kimberly the Geek | 2017-11-30 15:57:55

Ferris the crab, unofficial mascot for Rust [http://www.rustacean.net/]

Welcome, new Rustacean! So you have decided that you want to start learning Rust. You are where I was in summer 2017. I have read a lot of stuff online, tinkered with some byte-sized newbie contributions to the Rust project, and generally just hung out with the Rust community both virtually and in person (at Rust Belt Rust in Columbus, OH). In case you start feeling a little lost and overwhelmed on where to look, let me give you the Newbie’s Guide of what I found to be super useful. There is still a steep learning curve for Rust and you will need to put in the time and practice. However, certain channels will superboost your initiation into the friendly Rustacean community before you become lost and frustrated.

One thing to note is that the Rust language has rapidly evolved in the past couple of years, so some of the posts and examples online from 2015 and earlier may not be considered idiomatic anymore. This post will point you towards resources that are regularly updated.

Learning Rust

Most of the Rust docs are quite intimidating. I would start with the Rust book while completing Rust exercism exercises and tinkering with Rust code in play.rust-lang.org. You can import some external crates in the Rust Playground. (Examples and list of supported crates)

It’s important to actually be writing Rust code and running it through the compiler while reading the book. I had read a few chapters of the Rust book and thought I knew Rust, but then realized I didn’t really understand the concepts well when I tried to write code that compiles.

Checking out the Rust community

Cool! So now you know some Rust and getting an idea of the syntax, semantics and concepts. What next?

Check out https://users.rust-lang.org/. One thing that is so cool about the Rust Language project is that it is transparent in how the language evolves. It is fascinating to read the threads and learn about how decisions are made for language development as well as increasing the usage and ergonomics of Rust for everyone. This includes Rust for developers that write production code, hobbyists, embedded systems, IDE support, Rubyists, Pythonistas, “frustrated C++ programmers”, and everyone and anyone that wants to know and learn more about Rust! :D

Look for a local Rust meetup to meet other Rustaceans in your area.

Community Updates

  • Subscribe to This Week in Rust.
  • Look for easy and help wanted issues to contribute to the Rust projects. You can “watch” this thread and get notified about new requests for contributions.
  • There is mentoring for all experience levels during the Impl period.



There are two podcasts I follow right now:

  • New Rustacean (host: Chris Krycho), broad range of topics on concepts, new crates, and interviews with key Rust team members)
  • Rusty Spike (host: Jonathan Turner — Rust community team), latest updates of happenings in Rust community and Servo project


  • Rust talks online (Rust YouTube channel). There are 3 conference series: RustConf (West USA), Rust Belt Rust (Rust Belt, Midwest USA), and RustFest (Europe).
  • Follow @RustVideos on Twitter


If you are still looking for more links to add to your heap (heaps can grow, stacks not B-)): https://github.com/rust-unofficial/awesome-rust

Best of luck in your journey and hope to see you around in the Rust community!

Anna Liao | Stories by Anna Liao on Medium | 2017-11-29 17:29:57

Well hello

So I started to learn haskell.
And I know it's old news, but still, look how short haskell's quick sort implementation is!

qsort :: (Ord a) => [a] -> [a]
qsort [] = []
qsort (x:xs) =
  let smaller = qsort (filter (<= x) xs)
      bigger  = qsort (filter (> x) xs)
  in  smaller ++ [x] ++ bigger

And my first steps with haskell are:
  1. learnyouahaskell
  2. exercism/haskell
  3. https://www.haskell.org/documentation
  4. Real World Haskell, but I must say that this book is incredibly vague, obfuscated and the opposite of clear. It is the hardest book to get through that I've read so far, and it's not that haskell is complicated, no. It's the book.
  5. Web services for haskell -- yesod (how-to).

Asal Mirzaieva | code. sleep. eat. repeat | 2017-11-28 06:12:40

Last week in Germany, a few miles away from COP23, the conference where political leaders and activists meet to discuss climate, there was a bunch (100, to be exact) of developers and environmentalists participating in Hack4Climate to work on the same global problem – Climate Change.

COP23, the Conference of the Parties, happens yearly to discuss and plan action against climate change, especially around the Paris Agreement. This year it took place in Bonn, Germany, which is home to a United Nations Campus. Despite the ongoing efforts by governments, it's the need of the hour that every single person living on Earth contributes at a personal level to fighting this problem. After all, we all, myself included, have somehow contributed to climate change, knowingly or unknowingly. That's where the role of technology comes in: to create solutions by providing pools of resources and correct facts, so that everyone can start taking healthy steps.

I will try to put into words the thrilling experience Pranav Jain and I had participating as 2 of the 100 participants selected from all over the world for Hack4Climate. Pranav was also working closely with Rockstar Recruiting and the Hack4Climate team to spread awareness and bring in more participants before the actual event. It was a four-day hackathon which took place on a *cruise ship* in front of the United Nations Campus. Before the hackathon began, we had informative sessions from delegates of various institutions and organisations like the UNFCCC (United Nations Framework Convention on Climate Change), the MIT Media Lab, IOTA, and Ethereum. These sessions helped us all get more insight into the climate problem from both a technical and an environmental angle. We focussed on using Distributed Ledger Technology (blockchain) and open source, which can potentially help combat climate change.

Venue of Hack4Climate – the Scenic Crystal cruise ship stopping by the UN Campus in Bonn, Germany (Source)


The 20 teams worked on creating solutions that could fit into areas like identifying and tracking emissions, carbon pricing, distributed energy, sustainable land use, and sustainable transport.

Pranav Jain and I worked on green, low-carbon diamonds through our solution, Chain4Change. We used blockchain to track the carbon emissions in the mining of minerals, particularly diamonds. Our project helps track the mining, cutting, and polishing of every unique diamond available for purchase. It could also certify a carbon offset for each process and help the diamond company improve efficiency and save money. Our objective was to track carbon emissions throughout the supply chain, considering the kind of machinery, transport, and power being used. The technologies used in our solution are Solidity, Android, Python, and Web3JS, all integrated on a single platform.

We wanted to raise awareness among ordinary customers by putting the numbers (the carbon footprint) before them, so that they know how much energy and fossil fuel was consumed for a particular mineral. This would help them make a smart, climate-friendly, greener decision during their purchase. After all, our climate is more precious than diamonds.

Each project track had support from a particular company, which gave more insights and support on data and the business model. Our project track was sponsored by Everledger, a company which believes that transparency is the key to ensuring ethical trade.

Project flow (Source: Everledger)

Everledger’s CEO, Leanne, talked about women in technology and swiftly made us realize that we need equal representation of all genders to tackle this global problem. I talked about Outreachy with other female participants, and amidst such a diverse set of participants, I felt really connected with a few people I met who were open source contributors. The open source community has always been very warm and fun to interact with. We exchanged which conferences we had attended, like FOSDEM and DebConf, and which projects we had worked on. Outreachy’s current round 15 is ongoing; however, applications for the next round 16 of Outreachy internships will open in February 2018 for the May to August 2018 internship round. You can check this link for more information on projects under Debian and Outreachy. Good luck!

Lastly and most importantly, thank you, Nick Beglinger (CleanTech21 CEO) and team, for putting up this extraordinary event despite the initial challenges and for making us all believe that yes, we can combat climate change by moving further, faster, and together.

Thank you, Debian, for always supporting us :)

A few pictures…

Pranav Jain pitching the final product
Scenic Crystal, Rhine River and Hack4Climate Tee

Chain4Change Team Members – Pranav Jain, Toshant Sharma, Urvika Gola

Thanks for reading!

Urvika Gola | Urvika Gola | 2017-11-26 12:39:28

This blog was created because I am supposed to report on my journey through the Outreachy internship.

Let me start by saying that I'm biased towards systems that use flat files for blogs instead of ones that require a database. It is so much easier to make the posts available through other means (such as having them backed up in a Git repository), which assures their content will live on even if the site is taken down or dies. It is also so much better to download the content this way, instead of pulling down a huge database file, which may cost a significant amount of money given the amount of data to transfer. Having your content in flat files, in a format shared among many systems (such as Markdown), might also assure a smooth transition to a new system, should a change become necessary at some point.

I have experimented with some options while working on projects. I played with Lektor while contributing to PyBeeWare. I liked Lektor, but I found its documentation severely lacking. I worked with Grav while we were working towards getting tem.blog.br back online. Grav is a good CMS and it is definitely an alternative to WordPress, but, well, it needs a server to host it.

At first, I thought about using Jekyll. It is a good site generator, and it even has a Codecademy course on how to create a website and deploy it to GitHub Pages, which I took a while ago. I could have chosen it to develop this blog, but it is written in Ruby. Which is fine, of course. The first steps I took into learning how to program were in Ruby, using Chris Pine's Learn to Program | versão pt-br. So, what is my objection to Ruby? It so happens that I expect most of the development for the Outreachy project to be done in Python (and maybe some JavaScript), and I thought that adding a third language might make my life a bit harder.

That is how I ended up with Pelican. I had played a bit with it while contributing to the PyLadies Brazil website. During Python Sul, the regional Python conference we had last September, we also had a sprint to build the PyLadies Caxias do Sul website using Pelican, hosting it on GitHub Pages. It went smoothly. Look how awesome it turned out:

Image of the PyLadies Caxias do Sul website, white background and purple text

So, how to do one of those? Hang on tight, that I will explain it in detail on my next post! ;)

Renata D'Avila | Renata's blog | 2017-11-26 12:00:00

A lot of companies out there seem to want UX visual design skills more than they want UX research skills. I’ve often felt like I’m missing something important and useful by not having a strong grounding in visual design, and have been searching far and wide for some ideas of how to learn it.

One of the more interesting suggestions I have had relates to typography: many websites have typography and grid principles incorporated into them, so that is a good place to start. I’ve also had a number of suggestions to just make things, with pointers to where to get ideas of what to make. Below are the suggestions that make the most sense to me.

Typography to start?

A helpful fellow volunteer (Tezzica at Behance and other places — trained in graphic design with a UX aspect at MassArt) at the UX Fair offered me a number of useful ideas, including the strong recommendation that I read the book “Thinking With Type”, by Ellen Lupton. This book is, if nothing else, a very entertaining introduction to the various types and type families. There is the history of various fonts and types, descriptions of the pieces of a piece of type, and examples both good and bad (she calls the latter “type crimes” and explains why they are type crimes). I’m only 1/3 of the way through it, so I’m sure there’s a lot more to it.

Tezzica also suggested that I take the SkillShare course by the same author, Typography that Works. Given that I currently have free access, I am in fact doing that. Some of what we’ve covered, I knew from previous courses (grids, mostly), and some recapped a bit of what I’ve read in the book thus far. Reminders and different types of media are really useful.

I’m unexpectedly bemused by the current section, in which we are to start designing a business card. While I found the ‘business card’ size in Inkscape, I’m not completely sure that I’m managing to understand how to make the text do what I want it to do. I suspect that a lot of visual/graphic design is in figuring out how to make the tools do what you want, and then developing a better feel for ‘good’ vs ‘bad’ with practice! (I’m currently playing with Gravit Designer, which is a great deal easier to use while still being vector graphics.)

I’ve also had a chat with one of the folks I interviewed about getting a job in Boston, Sam, who had gotten a job between me talking to him and interviewing him. He also strongly suggested typography, and seems to have already worked through a lot of the problems I’m struggling with: not a lot of understanding of how visual design works, but a strong pull toward figuring it out.

Another thing that Tezzica mentioned was assignments she’d had in school where basically they had to play around with type. In one, the challenge was to make a bunch of graphics which were basically combining letters of two different typefaces into a single thing, or a ‘combined letterform’.

What do graphic design students do?

Tezzica suggested that it would be useful to peruse Behance for students of RISD and MassArt and see where the samples look similar, and potentially identify the assignments from classes at those schools. I have thus far not been successful in this particular endeavor.

Another possible way to find assignments is to peruse Tumblr or Pinterest and see if any old assignments or class schedules are still there. Also thus far unsuccessful!

Both Tezzica and Sam suggested doing daily challenges (on Behance, since the accounts there don’t require someone else to invite you) using ideas from dailyui.co or Dribbble. Tezzica also suggested taking a look at common challenge solutions and seeing if there’s an interesting and different way to do them. Tezzica also pointed out the sharpen.design website and its randomized design prompts.

Sam suggested taking a website that I like the look of, and trying to replicate it in my favorite graphic design tool (this will probably end up being Inkscape, even though it’s not as user-friendly as I’d like), and pointed out that it could go onto my portfolio with an explanation of what I was thinking while I did it.


Tezzica suggested a Hand Lettering course by Timothy Goodman and a Just Make Stuff course by him and Jessica Walsh (this one being largely about ‘making something already’). She also suggested Nicholas Felton’s Data Visualization courses (introduction to data visualization, and designing with processing). Both are on Skillshare.

Sam suggested I watch everything I can from John McWade on Lynda.com, as well as a graphic design foundations: typography course, also on Lynda.com.

Other training methods

Finally, Sam recommended taking screenshots and making notes of what I notice about sites that are interesting or effective and why.

This reminds me a bit of my periodic intent to notice what design patterns and informational architecture categorization methods websites use.

Mostly, I need to train my eye and my hand, both of which require practice. Focused practice, and I think between Sam and Tezzica, I have a good sense of where to go with it. At the moment, I’m focusing on the Thinking With Type book and course, as otherwise I’ll overwhelm myself.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-11-22 23:37:38