Open Source Summit is an open source technical conference in North America, gathering 2,000+ developers, operators and community leadership professionals to collaborate, share information and learn about the latest in open source technologies, including Linux, containers, cloud computing and more.
At the Open Source Summit, you can collaborate, share information and learn across a wide variety of co-located events like LinuxCon, ContainerCon and CloudOpen.


Last week was the best week I have had. Something great happened: I was a speaker at Open Source Summit North America 2017. I presented "Implementing IDR in file descriptor allocation code path". My project deals with converting the custom file descriptor allocation code to use the IDR, resulting in memory savings for processes with relatively few open files and improved performance for workloads with a very large number of open files. It was a very different experience. People loved my subject and my internship, and they enjoyed it very much. I would like to say a special thanks to my mentors Matthew Wilcox and Rik van Riel. I also want to thank Laura Abbott, Marina Zhurakhinskaya, Sarah Sharp and Helen Koike. I met a lot of people there. They told me about their work, the projects going on in open source and a lot of other things.



He spoke for a few minutes about open source in 2017: it has been a pretty incredible year, and things just keep getting better. He talked about how open source isn't just growing, it is actually accelerating exponentially in terms of its influence on technology and society. The numbers are amazing: there are 23 million open source developers worldwide, around 22 million public accounts on GitHub, 64 million repositories, 41 million lines of code, 1,100 new projects every single day, and 10,000 new versions of different open source projects every single day. The sheer growth in the number of developers and projects in open source is amazing, and even within individual projects the speed is accelerating.
I liked Jim's keynote very much; he explained open source development very beautifully. I met him personally later and he was very nice. He talked to me despite being such a prominent person.


He’s using cognitive and cloud computing to change the world through his open-source initiatives, for instance “The Cognitive Story”, meant to augment and amplify human capabilities, and “AskTanmay”, the world’s first Web-Based NLQA System, built using IBM Watson’s Cognitive Capabilities. His learning and sharing experience of 8 years in the fields of software and cognitive development has taught him how to best utilize and apply AI in fields ranging from banking and finance to healthcare and medicine to music and chatbots.

He is very down to earth. When I met him, I felt like he was my younger brother and we had known each other for a long time. His keynote was on the Present and Future of Tech - AI. It was amazing for me. He likes India very much; his mother said he always tries to come to India. Tanmay's mom and dad are very humble people.

I attended the following keynotes:



  • KEYNOTE: THE DIFFERENCE BETWEEN A CROWD AND A COMMUNITY - JOSEPH GORDON-LEVITT


  • KEYNOTE LIGHTNING TALK: OPENNESS IN A CLOUD CENTRIC WORLD - APARNA SINHA



I also attended the following talks:

Matthew Wilcox

The new XArray is easier to use than the radix tree. Conceptually, it is an array of 16 quintillion pointers, all of which are initially NULL. Just like an array, its basic operations are 'load' and 'store', unlike a tree's 'lookup', 'insert' and 'delete'. It provides some more advanced operations and enables users to build their own operations. This talk covers general aspects of API design for C programmers, as well as particular considerations for kernel API design due to the constrained environment.

Replacing the Radix Tree - Matthew Wilcox, Microsoft

Paul McKenney

This talk will look at a few representative RCU CPU stall warning messages and show how they can be decoded into real information that can help you find otherwise silent hangs the easy way. 
Decoding Those Inscrutable RCU CPU Stall Warnings - Paul McKenney, IBM

Gabriel Krisman Bertazi 

What influences a program's performance? Some reasons are quite obvious, like the algorithm implemented and the number of execution cycles, but what about the order in which libraries were linked? Or the shell environment size? Or even the sequence in which compiler optimizations were applied? In fact, modern computer systems include such a multitude of features and options, whose interactions with each other can affect the workload's performance, that it is surprisingly hard to write code that fully benefits from the potential of the CPU. In this talk, we will discuss how small changes in the code and in the execution environment can impact the execution time and how you can use Linux performance assessment tools, like perf and valgrind, to detect and mitigate such pitfalls.

Sarah Sharp

Sarah is also a co-coordinator for Outreachy, a paid internship program for increasing diversity in open source projects. Applications are open to women (cis and trans), trans men, and genderqueer people, and United States residents of any gender who are Black/African American, Hispanic/Latin@, American Indian, Alaska Native, Native Hawaiian, or Pacific Islander.
I met her and I really enjoyed talking to Sarah.

Sarah is a Linux and open source developer and has been running Debian-based Linux systems since 2003. She was a Linux kernel developer from 2006 to 2013 and is the original author of the Linux USB 3.0 xHCI host controller driver.

I took some pictures with people, and I met people I had wanted to meet for a long time.

Greg Kroah-Hartman

My first kernel patch was accepted by gregkh. It was a really beautiful feeling.

Nithya Ruff

I also enjoyed Nithya's talk on First 90 Days - Building an OSS Practice. I believe her talk helped me see how to make open source project and community work successful in the very first 90 days.

  Angela Brown

Tameika Reed

Marina Zhurakhinskaya

Hong Phuc Dang, FOSSASIA

Finally, my co-speaker and Outreachy friends:
  • Rehas Sachdeva
  • Amitoj Kour Chawala
  • Jaminy Prabhakaran

I would like to thank The Linux Foundation and Outreachy for sponsoring my trip.

Sandhya Babanrao Bankar | Kernel Stuff | 2017-10-18 11:07:16

Scarlett Clark | Home of Scarlett Gately Clark | 2017-10-18 09:04:06

How did I even get here

Yes, it was another year for me in Cape Town for PyConZA. I really didn't plan to go to PyConZA this year, but I guess it was kind of predestined. From October 3rd to 8th I enjoyed the great company of Pythonistas at PyConZA in South Africa.

I talked to my Outreachy mentor and decided I should use my Outreachy travel allowance to attend PyConZA. I was pretty much inspired by the fact that I had an accepted talk again this year.

I had a couple of annoying flight reschedules, a lot of mumbo jumbo and a bit of getting sick, but I finally made it for the fun!!!!!

My happenings

I gave a talk

I gave an accepted talk about Python 2 and 3 compatibility. This talk is part of a series I will be giving often, since it is the theme of my first book, which I am publishing with Apress. The talk was attended by a good number of people.

And yeah participated in the Sprints

I was really waiting for this moment of the conference, where I would make some contributions to an open source project. I was able to contribute to a Python debugger called Birdseye. I suggested a feature, which I in turn implemented with great help from Alex Hall, the maintainer of the project.

Ohh, I met the PyPy team (some of them)

It was also a great experience meeting and chatting with the PyPy team. This is a project I have always wanted to contribute to (I have a few merged PRs). I asked Ronan Lamy, one of the core developers, all the questions I ever had about PyPy, which he answered to my satisfaction. It was a great joy meeting in person the people who have reviewed your work.

I also discussed with them my plan of attending the annual PyPy sprint next year around February or March, and I am really looking forward to it :)

My Take aways From the Conference

Flavio’s keynote on day one really stood out for me. He talked about how we can choose the right battles in our work, and about how to handle a crazy schedule, something I really wanted to hear. This was my reaction.

There were also some great T-shirts from OfferZen. It sounds surprising, but I rarely appreciate conference T-shirts; this one blew me away. The color, white, was an improvement at least, and it had well thought out artwork of a unicorn :)

I had to get a moment in this T mahn!!!

I made a lot of friends and talked to very interesting people whom I can't mention here because I don't have forever to spend writing this blog.

Things I failed to do

Hahaha, there were also things I failed to do. I planned to tour Cape Town in detail (emphasis on detail) and go to the beach (anyone?), but I ended up coding for my whole stay at this place :( . Blame it on the sprints and partly on the weather, though the latter not so much.


It was a great conference. Now, I know I said the same thing last year, but this year it was exceptional. The talks were really well thought out and the people were amazing as usual. My next conference will be PyCon Pakistan, where I will speak in December. I'm so stoked for this.

Joannah Nanjekye | Joannah Nanjekye | 2017-10-18 00:00:00

When I was a graduate student, an undergrad working in my lab told me that our professor advisor had behaved inappropriately towards her when they were in the lab late at night. I told her that never happened to me and suggested that she speak with a respected female professor in our dept. Later, she told me that the female professor told her that this happens all the time and it’s not a big deal. I was shocked. The undergrad was planning to stay at Carnegie Mellon for her M.S. studies, but instead left for MIT. This happened 14 years ago, and I hope there is improved oversight at universities for these situations. (Just wanted to add that I did not stay in this group long and transferred to a different group to complete my studies.)

Anna Liao | Stories by Anna Liao on Medium | 2017-10-17 18:08:51


I have been blogging on and off for about 4 years now. There have been several periods of time when I wrote nothing, but none as long as this. Like always, it is that one action of just sitting down to write that is most difficult. Over time, hitting the “Publish” button doesn’t make me as anxious as it used to.

Lots of things have changed since my last post here. I completed my Bachelors in Engineering this summer and moved cities for an internship (which is still ongoing!). I have met so many amazing people, had adventures and made some great memories, all of this while loving the work that I do! So you know, I have a lot to share. So much to share about the last few months that I don’t even know where to start.

For now, I will leave this here and share some fond memories later :)


Bhakti Bhikne | Bhakti Bhikne | 2017-10-16 19:08:06

On the 7th of October, Prishtina, Kosova’s capital, hosted the first MiniDebConf.
The MiniDebConf Prishtina was an event open to everyone, regardless of their level of knowledge about Debian or other free and open source projects. MiniDebConf Prishtina covered a range of topics incidental to Debian and free software, including free software projects, the Outreachy internship, privacy, security, digital rights and diversity in IT.

I was happy to be the first speaker and to open the presentations with my talk, “Outreachy”.

It was the first MiniDebConf where, naturally, 50% of the talks were held by women (without having any goal for that number), and it always feels so good when diversity at free software events happens by default.
A group of women from Prizren were also part of the event (in August they successfully organized a hackathon with more than 25 women involved). The MiniDebConf was a great environment and opportunity to spread the word about Outreachy and other internship opportunities for women and people from underrepresented groups.
I was not the only Outreachy alumna there; Renata Gega was also part of the audience and a speaker.
We both shared our experience and gave tips on how to make a successful application and how to explore which project was best for them and fit their level of knowledge.
I also presented the work I did with my mentors and other Mozilla interns in my round, working for the “Diversity and Inclusion” team: how our work was structured, the product we came out with after 3 months, and how it is going now.
Personally, I thought that a presentation on this topic would be of high interest, since the call for applications to Outreachy is still open and giving a hand at this moment would be helpful for everyone who aspires to have a spot.

It is definitely one of the talks that I have enjoyed the most. Talking about something which you have been working to improve and empower for the last 4 years is always a wonderful experience; words can hardly describe the feelings I have when I see women inspired after watching examples that WOMEN CAN DO IT TOO!

See you at the next “Outreachy” experiences (hopefully next time as a mentor).


Kristi Progri | Kristi Progri | 2017-10-16 09:18:27

I recently went to see Ouaga Girls, suggested to me by my mentor Julia (thanks Julia! :))

Ouaga Girls is a story about women in Burkina Faso who study at a feminist education centre. What do they study, you might ask? They study to become auto mechanics.

The documentary has moments that make you smile and sigh at the same time. One of those is when the girls push the car and one of them asks “Maybe we should ask for help?” and somebody replies “We shouldn’t have to”. They couldn’t push the car all the way at the end. :(

The struggles that the women go through in the documentary are ones we all face. Someone wants to drop out because four years of studying is too long and money is hard to come by. Somebody gets pregnant in the middle of their studies; someone wants to be a singer (but that doesn’t pay the bills, does it?).

Routine sexism is so prevalent that it is not even surprising anymore. The girls are regularly questioned about why they want to be auto mechanics, told that it’s a physical job and that they are not able enough. The girls regularly push back, saying that a woman can do anything she wants. Here’s a narrative everyone’s familiar with. In the past, most of the attention has been focused on white collar jobs like engineering, so this is different because it focuses on blue collar jobs, and on how breaking into them is equally hard, if not more difficult.

My only problem is that movies like these don’t reach the audience that they should. The audience when I went to watch this was 2 men and 11 women. That might not be the best sample size, but experience tells me that I will not be presumptuous in assuming the same ratio everywhere else. Men are half of the problem, so they have to be part of the solution too. My male peers do not get, and outright bash, diversity programs, because from their POV we do not face any adversity. Constantly being talked over courtesy of your gender, having your opinion valued less, etc. are very real problems that they can’t seem to acknowledge. Diversity programs become important because they help you feel part of a community where your membership is constantly being undermined. I went off on a tangent here, but Ouaga Girls is a movie about how difficult it is for a woman to do a “man’s job”, and how they are doing it anyway while having other dreams, aspirations and hope!

Gargi Sharma | Stories by Gargi Sharma on Medium | 2017-10-14 12:11:29


“The Designership”

Will check that out!

I’ve also been fond of UX Mastery, and the Junior UX Community slack.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-10-12 00:06:13

I’m in the middle of a career change, and turned 40 a week ago. I’ve been working on UX projects in my own time, and trying to get paid work in UX has been quite difficult. I think part of the problem is that there are a huge number of new UXers in my area, which makes it harder to stand out as worth someone’s time.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-10-11 23:55:08

Finding a mentor is _hard_!

Then again, I’m not entirely clear on what a mentor is supposed to offer a mentee. I’m currently working on a simple project with someone in school for technical communications, which feels like mentoring even though I’ve not yet managed to get a paid UX position (I’m a career changer).

Just a strange position to be in!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-10-11 23:50:30

This post summarizes the Raft consensus algorithm presented in the paper In Search of an Understandable Consensus Algorithm by Diego Ongaro and John Ousterhout.



Raft is a distributed consensus algorithm. One of its important design goals was being understandable. It solves the problem of getting multiple servers to agree on shared state even in the face of failures. The shared state is usually a data structure supported by a replicated log. We need the system to be fully operational as long as a majority of the servers are up.

Raft works by electing a leader in the cluster. The leader is responsible for accepting client requests and managing the replication of the log to other servers. The data flows only in one direction: from leader to other servers.

Raft decomposes consensus into three sub-problems:

  • Leader Election: A new leader needs to be elected in case of the failure of an existing one
  • Log replication: The leader needs to keep the logs of all servers in sync with its own through replication.
  • Safety: If one of the servers has committed a log entry at a particular index, no other server can apply a different log entry for that index.
Raft ensures these properties are true at all times.


Each server is in one of three states: leader, follower or candidate.

State changes of servers
In normal operation there is exactly one leader and all of the other servers are followers. Followers are passive: they issue no requests on their own but simply respond to requests from leaders and candidates. The leader handles all client requests (if a client contacts a follower, the follower redirects it to the leader). The third state, candidate, is used to elect a new leader.

Raft divides time into terms of arbitrary length, each beginning with an election. If a candidate wins the election, it stays leader for the rest of the term. If the vote is split, then that term ends without a leader. The term number increases monotonically. Each server stores the current term number which is also exchanged in every communication.

…if one server’s current term is smaller than the other’s, then it updates its current term to the larger value. If a candidate or leader discovers that its term is out of date, it immediately reverts to follower state. If a server receives a request with a stale term number, it rejects the request.

Raft makes use of two RPCs to carry out its basic operation.

  • RequestVotes is used by candidates during elections
  • AppendEntries is used by leaders for replicating log entries and also as a heartbeat (a signal to check if a server is up or not, it doesn’t contain any log entries)
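As a sketch, the arguments these two RPCs carry can be written out as plain data types. The field names follow the paper; the Python rendering and defaults are my own:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RequestVoteArgs:
    term: int            # candidate's current term
    candidate_id: int    # candidate requesting the vote
    last_log_index: int  # index of candidate's last log entry
    last_log_term: int   # term of candidate's last log entry

@dataclass
class AppendEntriesArgs:
    term: int                # leader's current term
    leader_id: int           # so followers can redirect clients
    prev_log_index: int      # index of entry immediately preceding new ones
    prev_log_term: int       # term of that entry
    entries: List = field(default_factory=list)  # empty list => heartbeat
    leader_commit: int = 0   # leader's commit index
```

Note that an `AppendEntriesArgs` with an empty `entries` list doubles as the heartbeat.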

Leader Election

The leader periodically sends a heartbeat to its followers to maintain authority. A leader election is triggered when a follower times out after waiting for a heartbeat from the leader. This follower transitions to the candidate state and increments its term number. After voting for itself, it issues RequestVotes RPC in parallel to others in the cluster. Three outcomes are possible:

  1. The candidate receives votes from the majority of the servers and becomes the leader. It then sends a heartbeat message to others in the cluster to establish authority.
  2. If other candidates receive AppendEntries RPC, they check for the term number. If the term number is greater than their own, they accept the server as the leader and return to follower state. If the term number is smaller, they reject the RPC and still remain candidate.
  3. The candidate neither loses nor wins. If more than one server becomes candidate at the same time, the vote can be split with no clear majority. In this case a new election begins after one of the candidates times out.
Raft uses randomized election timeouts to ensure that split votes are rare and that they are resolved quickly. To prevent split votes in the first place, election timeouts are chosen randomly from a fixed interval (e.g., 150–300ms). This spreads out the servers so that in most cases only a single server will time out; it wins the election and sends heartbeats before any other servers time out. The same mechanism is used to handle split votes. Each candidate restarts its randomized election timeout at the start of an election, and it waits for that timeout to elapse before starting the next election; this reduces the likelihood of another split vote in the new election.
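A minimal sketch of the randomized timeout, assuming the paper's example interval of 150–300ms (the names here are mine):

```python
import random

ELECTION_TIMEOUT_MS = (150, 300)  # example interval from the paper

def new_election_timeout():
    """Pick a fresh randomized timeout; a follower starts an election
    when this much time passes without a heartbeat from the leader."""
    lo, hi = ELECTION_TIMEOUT_MS
    return random.uniform(lo, hi)
```

Each server draws its own value, so in most cases one server times out well before the others, wins the election, and suppresses further timeouts with its heartbeats.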

Log Replication:

The client requests are assumed to be write-only for now. Each request consists of a command to be executed ideally by the replicated state machines of all the servers. When a leader gets a client request, it adds it to its own log as a new entry. Each entry in a log:

  • Contains the client specified command
  • Has an index to identify the position of entry in the log. Index starts from 1
  • Has a term number to logically identify when the entry was written
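The fields above can be sketched as a small log entry type (a hypothetical rendering; the index is kept implicit as the 1-based position in the list):

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    term: int     # term in which the leader wrote the entry
    command: str  # client-specified command to apply to the state machine

# The log itself is just a list; entry i (1-based) lives at log[i - 1].
log = [LogEntry(term=1, command="x = 3"), LogEntry(term=2, command="y = 7")]
```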

It needs to replicate the entry to all the follower nodes in order to keep the logs consistent. The leader issues AppendEntries RPC to all other servers in parallel. The leader retries this until all followers safely replicate the new entry. When the entry is replicated to a majority of servers by the leader that created it, it is considered committed. All the previous entries including those created by earlier leaders are also considered committed. The leader executes the entry once it is committed and returns the result to the client.

The leader maintains the highest index it knows to be committed in its log and sends it out with the AppendEntries RPCs to its followers. Once the followers find out that an entry has been committed, they apply it to their state machines in order.

Raft maintains the following properties, which together constitute the Log Matching Property
• If two entries in different logs have the same index and term, then they store the same command.
• If two entries in different logs have the same index and term, then the logs are identical in all preceding entries.

When sending an AppendEntries RPC, the leader includes the term number and index of the entry that immediately precedes the new entry. If the follower cannot find a match for this entry in its own log, it rejects the request to append the new entry. This consistency check lets the leader conclude that whenever AppendEntries returns successfully from a follower, they have identical logs up to the index included in the RPC.
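This consistency check is small enough to sketch directly. Here a log is represented only by the term number of each entry, which the Log Matching Property makes sufficient for the comparison (the function name is mine):

```python
def matches_prev_entry(log_terms, prev_log_index, prev_log_term):
    """Follower-side check: does our log contain an entry at
    prev_log_index (1-based) whose term is prev_log_term?"""
    if prev_log_index == 0:
        return True   # the empty prefix always matches
    if prev_log_index > len(log_terms):
        return False  # our log is too short
    return log_terms[prev_log_index - 1] == prev_log_term
```

A follower rejects the AppendEntries request when this returns False.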

However, the logs of leaders and followers may become inconsistent in the face of leader crashes.

In Raft, the leader handles inconsistencies by forcing the followers’ logs to duplicate its own. This means that conflicting entries in follower logs will be overwritten with entries from the leader’s log.

The leader tries to find the last index where its log matches that of the follower, delete extra entries if any and add the new ones.

The leader maintains a nextIndex for each follower, which is the index of the next log entry the leader will send to that follower. When a leader first comes to power, it initializes all nextIndex values to the index just after the last one in its log.

Whenever AppendEntries returns with a failure from a follower, the leader decrements that follower's nextIndex and issues another AppendEntries RPC. Eventually, nextIndex will reach a value where the logs converge. AppendEntries will succeed when this happens, and the leader can remove extraneous entries (if any) and add new ones from its own log (if any). Hence, a successful AppendEntries from a follower guarantees that the leader's log is consistent with it.
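The backtracking can be simulated locally as a sketch. In the real protocol the decrement happens across repeated RPC round trips, but the effect on the logs is the same; entries are again reduced to their term numbers, which the Log Matching Property makes sufficient:

```python
def converge(leader_log, follower_log):
    """Find the longest matching prefix by decrementing next_index,
    then overwrite the follower's tail with the leader's entries."""
    next_index = len(leader_log) + 1
    while next_index > 1:
        prev = next_index - 1
        if prev <= len(follower_log) and follower_log[prev - 1] == leader_log[prev - 1]:
            break  # logs agree up to index `prev`
        next_index -= 1
    # keep the matching prefix, take the leader's suffix
    return follower_log[: next_index - 1] + leader_log[next_index - 1 :]
```

For example, a follower with a conflicting tail `[1, 2, 2, 3]` converges to a leader log `[1, 4, 4]` by keeping only the shared prefix `[1]`.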

With this mechanism, a leader does not need to take any special actions to restore log consistency when it comes to power. It just begins normal operation, and the logs automatically converge in response to failures of the Append-Entries consistency check. A leader never overwrites or deletes entries in its own log.


Raft makes sure that the leader for a term has committed entries from all previous terms in its log. This is needed to ensure that all logs are consistent and the state machines execute the same set of commands.

During a leader election, the RequestVote RPC includes information about the candidate’s log. If the voter finds that its own log is more up-to-date than the candidate’s, it doesn’t vote for it.

Raft determines which of two logs is more up-to-date by comparing the index and term of the last entries in the logs. If the logs have last entries with different terms, then the log with the later term is more up-to-date. If the logs end with the same term, then whichever log is longer is more up-to-date.
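Written out as code (a sketch; the function name is mine):

```python
def candidate_log_is_up_to_date(cand_last_term, cand_last_index,
                                voter_last_term, voter_last_index):
    """True iff the candidate's log is at least as up-to-date as the
    voter's: later last term wins; equal terms fall back to length."""
    if cand_last_term != voter_last_term:
        return cand_last_term > voter_last_term
    return cand_last_index >= voter_last_index
```

A voter would grant its vote only when this returns True (and it has not already voted in the term).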

Cluster Membership:

For the configuration change mechanism to be safe, there must be no point during the transition where it is possible for two leaders to be elected for the same term. Unfortunately, any approach where servers switch directly from the old configuration to the new configuration is unsafe.

Raft uses a two-phase approach for altering cluster membership. It switches to an intermediate configuration called joint consensus, then once that is committed, it switches over to the new configuration.

The joint consensus allows individual servers to transition between configurations at different times without compromising safety. Furthermore, joint consensus allows the cluster to continue servicing client requests throughout the configuration change.

Joint consensus combines the new and old configurations as follows:

  • Log entries are replicated to all servers in both the configurations
  • Any server from old or new can become the leader
  • Agreement requires separate majorities from both old and new configurations
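The "separate majorities" rule can be sketched with sets of server ids (a hypothetical helper, not from the paper's pseudocode):

```python
def joint_agreement(acks, old_members, new_members):
    """True iff `acks` (the set of servers that acknowledged an entry
    or voted for a candidate) forms a majority of BOTH the old and the
    new configuration."""
    def majority(members):
        return len(acks & members) * 2 > len(members)
    return majority(old_members) and majority(new_members)
```

For example, with old = {1, 2, 3} and new = {3, 4, 5}, the set {1, 2, 4} is a majority of the old configuration but not of the new one, so agreement fails.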

When a leader receives a configuration change message, it stores and replicates the entry for joint consensus C<old, new>. A server always uses the latest configuration in its log to make decisions, even if it isn’t committed. When joint consensus is committed, only servers with C<old, new> in their logs can become leaders.

It is now safe for the leader to create a log entry describing C<new> and replicate it to the cluster. Again, this configuration will take effect on each server as soon as it is seen. When the new configuration has been committed under the rules of C<new>, the old configuration is irrelevant and servers not in the new configuration can be shut down.

A fantastic visualization of how Raft works can be found here:


More material such as talks, presentations, related papers and open source implementations can be found here:

Raft Consensus Algorithm

I have dug only into the details of the basic algorithm that makes up Raft and the safety guarantees it provides. The paper contains a lot more detail, and it is super approachable, as the primary goal of the authors was understandability. I definitely recommend you read it, even if you’ve never read any other paper before.

In Search of An Understandable Consensus Algorithm: A Summary was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-10-11 20:01:24

Officially I have finished my internship with the Open Australia Foundation, and I already miss working with OAF so much. I learned so much about programming and development, as well as community-centred technology, challenges, and most importantly where I stand and where I want to go.

The project I worked on (and will most definitely keep working on as a contributor) is the councillor data contribution project (we did not find a cooler name for the project during my internship, but I believe it can still happen). The project is about creating a user-friendly system and interface to enable volunteer contributions to maintain an Australian local councillor database, so that other apps can be fed by the database.

I really like this project because it is about expanding and diversifying the participant base of civic tech. One of the challenges in civic tech is the critical lack of community participation and consultation in the process. It is interesting in some way (not in a good way) when a group of social justice minded developers create apps to solve social issues without consulting the area experts or people who are the most affected by the issue, nor attempting to create a process that lets those people actively participate to be a part of the solution. Deconstructing this divide between tech people and non-tech people is a key to create community-centered technology. And I believe the best way for deconstruction is the construction of something to actually address the challenge better. In the case of our project, to create a system that makes the contribution possible without technical knowledge.

Another thing that is important about this project is the process. If we create something to challenge the issue of exclusion, our development process needs to reflect that value and goal. Before starting to code, we spent about 2 weeks figuring out our design principles and goals. Although I felt an itch to rush the process and get on with coding, I am glad we spent the time to build a solid foundation for the project.

It is truly great for me to work on a project that addresses an issue I personally care about, and even more so because the team I worked with was just unbelievably amazing. Luke and Henare are both very patient, willing to teach, and VERY particular about how we write code. And that‘s what I want from mentors. They definitely exceeded my expectations. I learned a lot not only about Rails (the framework we use for the app), but also about communication and version control using git, and the reasons why they are so important.

I missed the official application deadline because I spent too much time questioning whether I should apply or not. I thought I wouldn’t get it anyway because I am not experienced enough. It was fortunate that OAF had an extended deadline, because they joined later. I saw they were using Ruby on Rails, and decided to give it a shot. I am so happy I did, as I ended up working with great people on a great project, and I learned tons.

At that time, I was on the fence about giving up a career in programming, because no matter what I tried, the door didn’t seem to open for me. And probably I was not settled on the decision myself. The leap I made from community work, art-based education and storytelling was very big, and in my previous work I mostly worked with Black, Indigenous, and people of color who do not shy away from being political, because we have to in order to survive. I was struggling to see myself in the corporate world of tech.

And to be very honest, it is not easy for people like us out there. Even within the carefully worded eligible candidates of Outreachy ("You identify as a woman (cis or trans), trans man, or genderqueer person (including genderfluid or genderfree)"), we need to acknowledge there are big gaps among these groups in terms of access, level of surveillance, and even the resources (i.e. other privileges) to capitalize on these oppressed identity markers (and this happens all the time in the era of the diversity industrial complex). There are not that many opportunities out there for us, and these internships are a few of them. This is the first time I have felt seen in my experience of the tech industry.

Through this internship, I came to realize that technology doesn't and shouldn't only belong to the corporate industry, because technology is also a culture. We need to build a culture of technology that is truly inclusive and welcoming, and to work to build technology for social good. I had long forgotten why I started to learn to code, and through this internship I remembered. I started to learn to code because I wanted to advance my art-based education practice. I wanted to write code to contribute to creating technologies in the hands of people who are at the margins. Although there is a lot that needs to be done and changed, I have also started to see ways to contribute to nurturing a culture of social justice from inside the industry, and started to meet people who are working for it too.

This internship was life changing. The biggest thank you to Luke Bacon, Henare Degan and Outreachy for making this possible. I will be around to make more contributions to OAF projects!

Hisayo Horie | blog@hisayo | 2017-10-08 04:00:00

#UbuntuRally New York

KDE at #UbuntuRally New York

I was happy to attend Ubuntu Rally last week in New York with Aleix Pol to represent KDE.
We were able to accomplish many things during this week, and that is a result of having direct contact with Snap developers.
So a big thank you out to Canonical for sponsoring me. I now have all of the KDE core applications,
and many KDE extragear applications, in the edge channel looking for testers.
I have also made a huge dent in the massive KDE PIM snap!
I hope to have this done by week's end.
Most of our issue list made it onto TO-DO lists 🙂
So from a KDE perspective, this sprint was a huge success!

Scarlett Clark | Home of Scarlett Gately Clark | 2017-10-06 20:50:42

In C++, memory allocated with new has to be managed manually.
Since there is no garbage collector, objects created this way must be freed with delete.
Oversights here can lead to memory problems such as dangling pointers, memory leaks, and so on.

The remedy C++ offers for being more careful about memory allocation is smart pointers. These pointers are really objects of a class, and when they go out of scope they are automatically destroyed by the destructor of that class. For this reason we can use the objects we create without worry :). Another goal of smart pointers is to offer the same syntax as raw (built-in) pointers (the -> and * operators).

Since smart pointers come from the standard library, adding #include <memory> is enough.
Because auto_ptr is deprecated and no longer used, this post covers shared_ptr, weak_ptr and unique_ptr.

== shared_ptr ==
This is the most comprehensive smart pointer, and its usage is the closest to raw pointers. When a shared_ptr is created, a structure holding two different reference counters is created along with it: one of these counters tracks the weak_ptr's, the other the shared_ptr's.

class Foo { ...... };

shared_ptr<Foo> sp1(new Foo);

When created like this, three objects come into existence: the shared_ptr itself, the manager object, and the managed object.
As new sprs (shared pointers) are attached to this shared_ptr the share counter increases, and as weak_ptr's are attached the weak counter increases; likewise, both decrease as they are destroyed. As long as at least one spr exists, the managed object stays alive. For the manager object to stay alive, at least one wpr (weak pointer) or spr is enough. If no sprs are left but wprs remain, the managed object is destroyed; the remaining wprs then see it as expired, and calling .lock() on them yields an empty shared_ptr.
shared_ptr's can be dereferenced with * like raw pointers, and their member functions can be reached with ->.

As other shared pointers and weak pointers are attached to a shared_ptr, the counts change as in the example below:

class Thing {
public:
    void defrangulate();
};
ostream& operator<<(ostream&, const Thing&); // assume Thing is printable
shared_ptr<Thing> find_something();
shared_ptr<Thing> do_something_with(shared_ptr<Thing> p);

void foo() {
    shared_ptr<Thing> spr1(new Thing); // share count is 1
    shared_ptr<Thing> spr2 = spr1;     // share count is now 2
    shared_ptr<Thing> spr3(new Thing); // a second, independent Thing
    spr1 = find_something(); // spr1 points elsewhere; count on the first Thing drops to 1
    cout << *spr2 << endl;
    spr1.reset();   // decrement count, delete if last
    spr2 = nullptr; // convert nullptr to an empty shared_ptr and decrement count
}
As in the example above, copying a shared_ptr into another spr increments the count by one. Trying to delete a smart pointer with delete causes a compile error; instead, .reset() decrements the count by one, or we can assign the pointer to another spr. .use_count() shows how many shared_ptr's currently point to the same managed object.

We cannot assign a raw pointer directly to a smart pointer:
shared_ptr<Thing> sp;
Thing *raw_ptr = new Thing;
sp = raw_ptr; // compile error!!!!!!!!
The only way to obtain the raw pointer from a shared pointer is to use .get().
We can also use them with inheritance, without any problems, as below:

class Base {};
class Derived: public Base {};
shared_ptr<Derived> dp1(new Derived);
shared_ptr<Base> bp1 = dp1;
shared_ptr<Base> bp2(dp1);

shared_ptr<Base> bp3(new Derived);

We can convert between pointers using static_pointer_cast:
shared_ptr<Base> base_ptr(new Base);
shared_ptr<Derived> derived_ptr;
derived_ptr = static_pointer_cast<Derived>(base_ptr);
Now derived_ptr shares ownership of the same object as base_ptr (such a downcast is only safe if the object really is a Derived).

Two shared_ptr's can be compared with the == and != operators. Similarly, if (spr) {..} evaluating to true means spr points to an object.

For example, after calling spr3.reset(), once .use_count() starts returning zero, if (spr3) {...} will return false.

Note 1: Using a raw pointer and a smart pointer to the same object at the same time is not recommended; it can lead to deleting the same object twice, or to a dangling pointer.

Note 2: Creating a shared pointer with new causes two separate allocations, which is more costly (the first for the object itself, the second for the shared pointer's control structure). Using the make_shared expression instead reduces this to a single allocation.

shared_ptr <Thing> spr(new Thing); // two allocations
shared_ptr <Thing> sp(make_shared<Thing>()); // only one allocation

== weak_ptr ==
weak_ptr's can only exist by way of another weak_ptr or a shared_ptr. They do not affect the lifetime of the object they belong to. Dereferencing (*) and the -> operator are not defined for weak_ptr. For this reason weak_ptr's are weak references; they are generally used to check whether the shared_ptr they belong to is still valid, or to break references that form cycles between shared_ptr's (cycle references).
A weak_ptr can be emptied with .reset(), but assigning nullptr to it is not possible.
To obtain a shared_ptr from a weak_ptr, use .lock():
shared_ptr<Thing> sp(new Thing);
weak_ptr<Thing> wp1(sp); // construct wp1 from a shared_ptr
weak_ptr<Thing> wp2; // an empty weak_ptr - points to nothing
wp2 = sp; // wp2 now points to the new Thing
weak_ptr<Thing> wp3(wp2); // construct wp3 from a weak_ptr
weak_ptr<Thing> wp4 = wp2; // wp4 now points to the new Thing
shared_ptr<Thing> sp2 = wp2.lock(); // sp2 now shares ownership of the Thing

The shared-pointer-to-shared-pointer assignments we saw in the shared_ptr section do not create cycle references, so weak_ptr is not needed there:
shared_ptr<Thing> p1(new Thing);
shared_ptr<Thing> p2 = p1;
shared_ptr<Thing> p3 = p2;
shared_ptr<Thing> p4 = p3;

All that happens above is that multiple pointers refer to the same object. For cycle references you can study the example here. That example is very clear and basic; in much larger blocks of code, I imagine cycles are created in far more complicated ways :). This is why care is needed when using smart pointers.

Using weak_ptr's to guard against dangling pointers:
shared_ptr<int> sptr = make_shared<int>(10);
weak_ptr<int> wptr = sptr;
sptr.reset(); // the int is destroyed; wptr is now expired
if (auto sp = wptr.lock()) {
    // safe: sp keeps the object alive while we use it
}
 // had we gone through the pointer directly here instead,
 // we would have been using a pointer to a destroyed object

Another important situation is the use of the this object. While no problem arises when raw pointers are used throughout, a problem can occur with smart pointers:
class Thing {
public:
    Thing()  { cout << "Thing....\n"; }
    ~Thing() { cout << "~Thing....\n"; }
    void foo();
};

void Thing::foo() {
    cout << "Thing::foo() ... \n";
    shared_ptr<Thing> sp(this); // creates a second, independent owner of the same object!
}

int main() {
    shared_ptr<Thing> t2(new Thing);
    t2->foo();
}
Here a new shared_ptr is produced from the raw this object, but that raw object is already in use by another shared pointer. When this's shared_ptr is destroyed, the object t2 points to is destroyed with it, and when t2 is later destroyed, an already-deleted object is deleted a second time.
The facility used to prevent this is enable_shared_from_this:
class Thing : public enable_shared_from_this<Thing> {
public:
    Thing()  { cout << "Thing....\n"; }
    ~Thing() { cout << "~Thing....\n"; }
    void foo();
};

void Thing::foo() {
    cout << "Thing::foo() ... \n";
    shared_ptr<Thing> sp = shared_from_this(); // shares ownership with the existing shared_ptr
}

== unique_ptr ==
With unique_ptr, the pointed-to object is destroyed when the pointer goes out of scope. It is somewhat similar to shared_ptr, but there is no reference counter, so it costs less. Ownership of the object is exclusive; there is no shared structure. Since copy construction and copy assignment are not allowed, we cannot assign or copy one unique_ptr to another, but we can move ownership to another pointer by returning it from a function:
unique_ptr<Thing> p1(new Thing);
unique_ptr<Thing> p2(p1); // error, copy construction not allowed
unique_ptr<Thing> p3;
p3 = p1; // error, copy assignment not allowed

unique_ptr<Thing> create_unique_ptr() {
  unique_ptr<Thing> local_ptr(new Thing);
  return local_ptr; // ownership moves to the caller
}

unique_ptr<Thing> uptr;
uptr = create_unique_ptr();

Alternatively, we can transfer ownership using the move() function.


Ebru Akagunduz | Ebru A.k.a.gndz | 2017-10-04 09:27:22

No matter how we try to make sense of it, having children is irrational. It defies what we conceive as logic. Something that we won't ever admit to ourselves. Especially since we almost always view childbearing and raising through such terribly rose-tinted glasses. To have someone totally dependent on you, both emotionally and physically, for their every need. For whom you shall be by turns criminal and detective, defender and prosecutor, judge, jury and accountability committee. Despite the fulfillment we find in seeing a child at peace, loved, well fed, and clothed, knowing it is our hard work that has made this possible, it is not enough to look at this (as a by-stander) and say, "Yes, I would love to have children, if only to have this feeling". The demands and the joys do not balance out as often as you thought they would and it is a while before you break-even.

If you stopped and looked at it all: the sleep deprivation, worry, anxiety, and tears; the joys, hopes, longings, and desires that you pin on your child. The pride, contentment and satisfaction. The potent emotional cocktail that you take every day you're with them. The changes that happen to you and continue to happen, and you don't know whether to resist or be swallowed whole. The struggle as you grapple with growth that leaves you outwardly unchanged but separates you from the years of your youth. Life's questions now demand certain answers. No longer can you leave contradicting facts to remain easy bedfellows; every moment, without knowing, you are choosing. Choosing the story you will tell. The narrative you will shape them with. We want to give them certainty. Security. We want them to learn the lessons without paying the price. Somehow. Or if they must suffer, let it be when our eyes are closed and our hearts are numb, and it is beyond us to feel anything.

A better way is to acknowledge the having of children as a biological drive. Trying to come up with reasons for it is futile. We want to have children because the genetic material that we carry wants to live. To survive. To make as many copies as it can of itself. In a sense, it longs for immortality. And the entire procedure we have come up with for having children: find a suitable mate, make a home in which your children can be raised, and then have said children, simply enables our DNA to have a higher chance of surviving (your children growing to adulthood) and making more copies of itself. Do not attempt to rationalize, to come up with reasons. To sugarcoat. Acknowledge the madness of it all. The awkward balance you're trying to keep. Take it one day at a time and do your best. Mourn who you used to be and accept what you are becoming. The two go together. They both live inside you.

I will end this with a quote from Khalil Gibran's wonderful book, The Prophet:

"Your children are not your children. They are the sons and daughters of Life's longing for itself. They come through you but not from you, And though they are with you yet they belong not to you.

You may give them your love but not your thoughts, For they have their own thoughts. You may house their bodies but not their souls, For their souls dwell in the house of tomorrow, which you cannot visit, not even in your dreams. You may strive to be like them, but seek not to make them like you. For life goes not backward nor tarries with yesterday..."

Leni Kadali | Memoirs Of A Secular Nun | 2017-10-03 06:54:44

This post will distill the contents of the paper Viewstamped Replication Revisited by Barbara Liskov and James Cowling.

This paper presents an updated version of Viewstamped Replication, a replication technique that handles failures in which nodes crash. It describes how client requests are handled, how the group reorganizes when a replica fails, and how a failed replica is able to rejoin the group.


The Viewstamped Replication protocol, referred to as VR, is used for replicated services that run on many nodes known as replicas. VR uses state machine replication: it maintains state and makes it accessible to the clients consuming that service. Some features of VR:

  • VR is primarily a replication protocol, but it provides consensus too.
  • VR doesn’t use any disk I/O; it uses replicated state for persistence.
  • VR deals only with crash failures: a node is either functioning or it completely stops.
  • VR works in an asynchronous network, like the Internet, where nothing can be concluded about a message that doesn’t arrive: it may be lost, delivered out of order, or delivered many times.

Replica Groups

VR ensures reliability and availability when no more than a threshold of f replicas are faulty. It does this by using replica groups of size 2f + 1; this is the minimal number of replicas in an asynchronous network under the crash failure model.

The simple proof of the above statement is as follows: in a system where up to f nodes may crash, we need at least a majority of f+1 nodes that can mutually agree to keep the system functioning.

A group of f+1 replicas is often known as a quorum. The protocol needs the quorum intersection property to be true to work correctly. This property states that:

The quorum of replicas that processes a particular step of the protocol must have a non-empty intersection with the group of replicas available to handle the next step, since this way we can ensure that at each next step at least one participant knows what happened in the previous step.


VR architecture

The architecture of VR is as follows:

  1. The user code is run on client machines on top of a VR proxy
  2. The proxy communicates with the replicas to carry out the operations requested by the client. It returns the computed results from the replicas back to the client
  3. The VR code on the replicas' side accepts client requests from the proxy, runs the protocol, and executes the request by making an up-call to the service code
  4. The service code returns the result to the VR code which in turn sends a message to the client proxy that requested the operation


The challenge for the replication protocol is to ensure that operations execute in the same order at all replicas in spite of concurrent requests from clients and in spite of failures.

For all replicas to end up in the same state, it is important that the above condition is met.

VR deals with the replicas as follows:

Primary: Decides the order in which the operations will be executed

Secondary: Carry out the operations in the same order as selected by the primary

What if the primary fails?

  • VR allows different replicas to assume the role of primary if it fails over time
  • The system moves through a series of views. In each view, one replica assumes the role of primary
  • The other replicas watch the primary. If it appears to be faulty then they carry out a view-change to select a new primary

We consider the following three scenarios of the VR protocol:

  • Normal case processing of user requests.
  • View changes to select a new primary.
  • Recovery of a failed replica so that it can rejoin the group.

VR protocol

State of VR at a replica

The state maintained by each replica is described in the paper. Some points to note:

  • The identity of the primary isn’t stored but computed using the view number and the configuration.
  • The replica with the smallest IP is replica 1 and so on.

The client side proxy also maintains some state:

  • It records the configuration
  • It records the current view number to track the primary
  • A client id and an incrementing client request number

Normal Operation

  • Replicas participate in processing of client requests only when their status is normal
  • Each message sent contains the sender’s view number. Replicas process only those requests whose view number matches their own. If the sender is behind, the receiver drops the message. If the sender is ahead, the receiver performs a state transfer to catch up.
Normal mode operation

The normal operation of VR can be broken down into the following steps:

  1. The client sends a REQUEST message to the primary asking it to perform some operation, passing it the client-id and the request number
  2. When the primary receives the request, it cross-checks the info present in the client table. If the request number is smaller than the one present in the table, it discards the request. It re-sends the response if the request was the most recently executed one.
  3. The primary increases the op-number, appends the request to its log and updates the client table with the new request number. It sends a PREPARE message to the replicas with the current view-number, the operation-number, the client’s message, and the commit-number (the operation number of the most recently committed operation)
  4. The replicas won’t accept a message with an op-number until they have all operations preceding it, using state transfer to catch up if required. Then they add the operation to their log, update the client table and send a PREPAREOK message to the primary. This message indicates that the operation, and all operations preceding it, have been prepared successfully.
  5. The primary waits for a response from f replicas before committing the operation. It increments the commit-number. After making sure all operations preceding the current one have been executed, it makes an up-call to the service code to execute the current operation. A REPLY message is sent to the client containing the view-number, request-number, and the result of the up-call.

Usually the PREPARE message is used to inform the backup replicas of committed operations; the primary can also do so by sending a COMMIT message. To execute a request, a backup has to make sure that the operation is present in its log and that all the previous operations have been executed. Then it executes the said operation, increments its commit-number, and updates the client’s entry in the client-table, but doesn’t send a reply to the client, as the primary has already done that.

If a client doesn’t receive a timely response to a request, it re-sends the request to all replicas. This way if the group has moved to a later view, its message will reach the new primary. Backups ignore client requests; only the primary processes them.
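The primary's side of the normal-case steps above can be sketched roughly as follows. All type and function names here are invented for illustration; a real implementation would also check view numbers on incoming messages and handle timeouts and state transfer:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical types, loosely following the protocol description.
struct Request { int client_id; int request_num; std::string op; };
struct ClientEntry { int last_request_num; std::string last_reply; };

struct Primary {
    int view_num = 0, op_num = 0, commit_num = 0;
    std::vector<Request> log;
    std::map<int, ClientEntry> client_table;

    // Steps 2-3: validate against the client table, then prepare.
    void on_request(const Request& req) {
        auto it = client_table.find(req.client_id);
        if (it != client_table.end()) {
            if (req.request_num < it->second.last_request_num)
                return;                        // stale request: discard
            if (req.request_num == it->second.last_request_num) {
                resend(it->second.last_reply); // duplicate: re-send the reply
                return;
            }
        }
        ++op_num;
        log.push_back(req);                    // append to the log
        client_table[req.client_id].last_request_num = req.request_num;
        broadcast_prepare(view_num, op_num, req, commit_num);
    }

    // Step 5: commit once f backups have acknowledged the operation.
    void on_prepare_ok(int acks_for_next_op, int f) {
        if (acks_for_next_op >= f) {
            ++commit_num;
            // up-call into the service code, then send REPLY to the client
        }
    }

    // Network side effects stubbed out for the sketch.
    void resend(const std::string&) {}
    void broadcast_prepare(int, int, const Request&, int) {}
};
```

The client-table check is what makes duplicate requests idempotent: a re-sent request never advances the op-number a second time.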

View change operation

Backups monitor the primary: they expect to hear from it regularly. Normally the primary is sending PREPARE messages, but if it is idle (due to no requests) it sends COMMIT messages instead. If a timeout expires without a communication from the primary, the replicas carry out a view change to switch to a new primary.

There is no leader election in this protocol. The primary is selected in round-robin fashion. Each member has a unique IP address; the next primary is the functioning backup replica with the smallest IP. Each member of the group already knows who is expected to be the next primary.

Every executed operation at the replicas must survive the view change in the order specified when it was executed. The up-call is carried out at the primary only after it receives f PREPAREOK messages. Thus the operation has been recorded in the logs of at least f+1 replicas (the old primary and f replicas).

Therefore the view change protocol obtains information from the logs of at least f + 1 replicas. This is sufficient to ensure that all committed operations will be known, since each must be recorded in at least one of these logs; here we are relying on the quorum intersection property. Operations that had not committed might also survive, but this is not a problem: it is beneficial to have as many operations survive as possible.
  1. A replica that notices the need for a view change advances its view-number, sets its status to view-change and sends a START-VIEW-CHANGE message. A replica identifies the need for a view change based on its own timer or because it receives a START-VIEW-CHANGE or a DO-VIEW-CHANGE from others with a view-number higher than its own.
  2. When a replica receives f START-VIEW-CHANGE messages for its view-number, it sends a DO-VIEW-CHANGE message to the node expected to be the new primary. The message contains the state of the replica: the log, the most recent operation-number and commit-number, and the number of the last view in which its status was normal.
  3. The new primary waits to receive f+1 DO-VIEW-CHANGE messages from the replicas (including itself). Then it updates its state to the most recent based on the info from the replicas (see the paper for all the rules). It sets its number as the view-number in the messages and changes its status to normal. It informs all other replicas by sending a STARTVIEW message with the most recent state, including the new log, commit-number and op-number.
  4. The primary can now accept client requests. It executes any committed operations that it hadn’t executed previously and sends the replies to clients.
  5. When the replicas receive a STARTVIEW message, they update their state based on the message. They send PREPAREOK messages for all uncommitted operations present in their log after the update, and execute these operations to be in sync with the primary.

To make the view change operation more efficient, the paper describes the following approach:

The protocol described has a small number of steps, but big messages. We can make these messages smaller, but if we do, there is always a chance that more messages will be required. A reasonable way to get good behavior most of the time is for replicas to include a suffix of their log in their DO-VIEW-CHANGE messages. The amount sent can be small since the most likely case is that the new primary is up to date. Therefore sending the latest log entry, or perhaps the latest two entries, should be sufficient. Occasionally, this information won’t be enough; in this case the primary can ask for more information, and it might even need to first use application state to bring itself up to date.


Recovery

When a replica recovers after a crash it cannot participate in request processing and view changes until its state is at least as recent as it was when it failed. If it could participate sooner than this, the system could fail.

The replica should not “forget” anything it has already done. One way to ensure this is to persist the state on disk, but that would slow down the whole system. This isn’t necessary in VR, because the state is persisted at the other replicas and can be obtained by using a recovery protocol, provided that the replicas fail independently.

When a node comes back up after a crash it sets its status to recovering and carries out the recovery protocol. While a replica’s status is recovering it does not participate in either the request processing protocol or the view change protocol.

The recovery protocol is as follows:

  1. The recovering replica sends a RECOVERY message to all other replicas with a nonce.
  2. A replica replies to the recovering replica only if its status is normal, with a RECOVERY-RESPONSE message containing its view-number and the nonce it received. If it is the primary, it also sends its log, op-number and commit-number.
  3. When the recovering replica has received f+1 RECOVERY-RESPONSE messages, including one from the primary, it updates its state and changes its status to normal.
The protocol uses the nonce to ensure that the recovering replica accepts only RECOVERY-RESPONSE messages belonging to this recovery and not an earlier one.


Reconfiguration

Reconfiguration deals with epochs. An epoch represents the group of replicas processing client requests. If the threshold for failures, f, is adjusted, the system can add or remove replicas and transition to a new epoch, which it tracks through the epoch-number.

Another status, namely transitioning, is used to signify that the system is moving between epochs.

The approach to handling reconfiguration is as follows. A reconfiguration is triggered by a special client request. This request is run through the normal case protocol by the old group. When the request commits, the system moves to a new epoch, in which responsibility for processing client requests shifts to the new group. However, the new group cannot process client requests until its replicas are up to date: the new replicas must know all operations that committed in the previous epoch. To get up to date they transfer state from the old replicas, which do not shut down until the state transfer is complete.

The VR sub protocols need to be modified to deal with epochs. A replica doesn’t accept messages from an older epoch compared to what it knows, i.e., with an older epoch-number. It informs the sender about the new epoch.

During a view change, the primary cannot accept client requests while the system is transitioning between epochs. It does so by checking whether the topmost request in its log is a RECONFIGURATION request. A recovering replica in an older epoch is informed of the new epoch if it is part of it; otherwise it shuts down.

The issue that comes to mind is that the client requests can’t be served while the system is moving to a new epoch.

The old group stops accepting client requests the moment the primary of the old group receives the RECONFIGURATION request; the new group can start processing client requests only when at least f + 1 new replicas have completed state transfer.

This can be dealt with by “warming up” the nodes before reconfiguration happens. The nodes can be brought up-to-date using state transfer while the old group continues to reply to client requests. This reduces the delay caused during reconfiguration.

This paper has presented an improved version of Viewstamped Replication, a protocol used to build replicated systems that are able to tolerate crash failures. The protocol does not require any disk writes as client requests are processed or even during view changes, yet it allows nodes to recover from failures and rejoin the group.
The paper also presents a protocol to allow for reconfigurations that change the members of the replica group, and even the failure threshold. A reconfiguration technique is necessary for the protocol to be deployed in practice since the systems of interest are typically long lived.

Viewstamped Replication Revisited: A Summary was originally published in freeCodeCamp on Medium.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-10-02 18:29:54

Drum roll…..!!! We released Lightbeam yesterday and I am super proud of my Outreachy work. You can find Lightbeam at the Mozilla add-ons page.

What is new?

Below are the highlights of Lightbeam 2.0.1:

  • Re-written from scratch to adapt to the browser web-extension API
  • Asynchronous data capture and storage
  • IndexedDB for data storage (data is stored only locally and not shared)
  • Performance first
  • The graph visualisation is canvas based, thereby improving the performance
  • ES6 features – await/async, proxy objects, arrow functions, template strings
  • Responsive UI with CSS grid and flexbox
  • D3.js for the force layout of the graph visualisation including drag, zoom and pan
  • Dexie.js for IndexedDB


What’s next?

Outreachy has been the best gift I have gifted myself. I highly recommend this program.

Stay tuned for more updates! Do give Lightbeam a try ^_^


Princiya Marina Sequeira | P's Blog | 2017-09-29 11:59:00

This blog documents the Java suite that I worked on during my outreachy internship with Ceph for anyone interested in building on to the work.

The test suite tests Ceph’s Rados Gate Way S3 interface.


The tests are written in Java programming language and make use of the AWS Java SDK to perform the s3 operations.

The TestNG test framework is used for implementing the tests, and gradle automates running them to suit CLI environments.

The test configuration data is kept in the properties file.

Implemented Tests

About 150 tests are currently implemented in this suite. I will share the details on the specifics.

Bucket Operations

Most of the bucket operations like Create, update, read and delete for most scenarios are tested with all possible headers:

  • General CRUD of buckets.
  • Bucket Lifecycle configuration
  • Bucket Creation with all possible headers.

Object operations

Most of the cases for object operations were tested including;

  • General Create, Read, Update, Delete scenarios
  • Writing/Reading objects with all possible headers.
  • Getting and setting metadata.
  • Getting objects with delimiters, maxkeys, prefix, markers
  • Reading and writing objects with server side encryption (SSE) and SSEKMSKeyId.
  • Reading and writing objects with SSECustomerAlgorithm, SSECustomerKey and SSECustomerKeyMD5.
  • Multipart uploads and transfer.


RGW has shown some inconsistencies when performing AWS4 operations in the past, so tests are implemented for these unique cases, including:

  • Multipart Uploads of varying sizes from small to big files for both LLAPI and HLAPI.
  • Multipart Copy of varying sizes from small to big files for both LLAPI and HLAPI.
  • File and directory uploads of small to big files for both LLAPI and HLAPI.
  • Object Read and Create with all possible headers for both LLAPI and HLAPI.

What is Pending

There are still more scenarios that aren't yet tested, some specific to the AWS Go SDK, but some are critical cases especially experienced on RGW. The outstanding tests pending are:

  • Host-style addressing, as elaborated in this issue.
  • Versioning
  • Multipart Uploads/Copy/Transfer with SSE and KMS

Possible Changes

I will make a couple of refactors for style-guide consistency, remove obvious duplication of methods, and do other clean-ups. I will also add documentation that I had overlooked.

The branch shall be merged to master after we have done rigorous testing with Ali Maredia, the mentor of the project.

On second thought, I am also revisiting testing the utility methods.

Going forth

I will keep committing to this project, though maybe not daily, because I have to concentrate on school now. I will keep in touch with the mentors, Ali Maredia and Marcus Watts, on any changes I have made or need to make.

Joannah Nanjekye | Joannah Nanjekye | 2017-09-27 00:00:00

This blog documents the Go test suite that I worked on during my Outreachy internship with Ceph, for anyone interested in building on the work.

The test suite tests Ceph’s RADOS Gateway (RGW) S3 interface.


The tests are written in the Go programming language and use the AWS Go SDK to perform the S3 operations.

We use the built-in Go testing package for the tests, but we also use testify for assertions and for creating suites.

The test configuration data is kept in a TOML file, and Viper is used to load and read it. Viper was chosen for its flexibility in supported configuration formats: we load the configuration data by reading the config file by name without its extension, which accommodates any configuration type, i.e. YAML, TOML, etc. Beyond this, Viper gives us a less complex way to read config data using dot notation rather than structs.

Implemented Tests

About 150 tests are currently implemented in this suite. Here are the specifics.

Bucket Operations

Most of the bucket operations are tested for most scenarios, including:

  • General CRUD of buckets.
  • Bucket ACL
  • Bucket Lifecycle configuration
  • Bucket Creation with all possible headers

Object operations

Most of the cases for object operations are tested, including:

  • General Create, Read, Update, Delete scenarios
  • Writing/Reading objects with all possible headers.
  • Getting and setting metadata.
  • Getting objects with delimiters, maxkeys, prefix, markers
  • Reading and writing objects with server-side encryption (SSE) and SSEKMSKeyId.
  • Reading and writing objects with SSECustomerAlgorithm, SSECustomerKey and SSECustomerKeyMD5.
  • Multipart uploads and transfer.

What is Pending

There are still more scenarios that aren’t yet tested: some are specific to the AWS Go SDK, but some are critical cases, especially ones experienced on RGW. The outstanding tests are:

  • Host-style addressing, as elaborated in this issue.
  • Versioning
  • Multipart Uploads with SSE and KMS
  • AWS4

Possible Changes

For now, I have been working on and intend to push commits to a new branch. The branch will mainly contain refactors and a change in dependency management.

The refactors will mostly change some utility methods to use interfaces, reduce duplication of methods, and do other clean-ups. I will also make changes to the folder structure and add a few more documentation files.

To manage dependencies, I will change to using dep.

I am also revisiting testing the utility methods; I had earlier flagged this off as not making sense, but later noticed that tests need to depend on reliable utility methods.

The branch shall be merged to master after we have done rigorous testing with Ali Maredia, the mentor of the project.


I will keep committing to this project, though it may not be daily, because I have to concentrate on school now. I will keep in touch with the mentors, Ali Maredia and Marcus Watts, on any changes I have made or need to make.

Joannah Nanjekye | Joannah Nanjekye | 2017-09-27 00:00:00

This post distills the content presented in the paper “Some Constraints & Trade-offs In The Design of Network Communications” published in 1975 by E. A. Akkoyunlu et al.

This paper focuses on the inclusion of Inter Process Communication (IPC) primitives and the consequences of doing so. It explores, in particular, the time-out and the insertion property feature described in detail below with respect to distributed systems of sequential processes without system buffering & interrupts.

It also touches upon the two generals problem which states that it’s impossible for two processes to agree on a decision over an unreliable network.


The design of an Inter Process Communication Mechanism (IPCM) can be described by stating the behavior of the system & the required services. The features to be included in the IPCM are very critical as they might be interdependent, hence the design process should begin with a detailed spec. This involves thorough understanding of the consequences of each decision.

The major aim of the paper is to point out the interdependence of the features to be incorporated in the system.

The paper states that at times the incompatibility between features is visible from the start. Yet, sometimes two features which seem completely unrelated end up affecting each other significantly. If the trade-offs involved aren’t explored at the beginning, it might not be possible to include desirable features. Trying to accommodate conflicting features results in messy code at the cost of elegance.

Intermediate Processes:

Let’s suppose a system doesn’t allow indirect communication between processes that cannot establish a connection. The users just care about the logical sender and receiver of the messages: they don’t care what path the messages take or how many processes they travel through to reach their final destination. In such a situation, intermediate processes come to our rescue. They’re not a part of the IPCM but are inserted between two processes that can’t communicate directly through a directory or broker process when the connection is set up. They’re the only ones aware of the indirect nature of communication between the processes.

Centralized vs Distributed Systems:

Centralized Communication Facility

  1. Has a single agent which is able to maintain all state information related to the communication happening in the system
  2. The agent can also change the state of the system in a well-defined manner

For example, if we consider the IPCM to be the centralized agent, it’ll be responsible for matching the SEND & RECEIVE requests of two processes, transferring data between their buffers and relaying appropriate status to both.

Distributed Communication Facility

  1. No single agent has the complete state information at any time
  2. The IPCM is made of several individual components which coordinate, exchange and work with parts of state information they possess.
  3. A global change can take a considerable amount of time
  4. If one of the components crashes, the activity of other components still interests us

Case 1:

In Figure 1, P1 and P2 are the two communicating processes on different machines over a network with their own IPCMs and P is the interface which enables this, with parts that lie on both machines. P handles the details of the network lines.

If one machine or a communication link crashes, we want the surviving IPCM’s to continue their operation. At least one component should detect a failure and be able to communicate. (In the case of a communication link failure, both ends must know.)

Case 2:

Distributed communication can also happen on the same machine, given that there are one or more intermediate processes taking part in the system. In that case, P, P1 & P2 will be processes on the same system with identical IPCMs. P is an intermediate process which facilitates the communication between P1 & P2.

Transactions between P1 & P2 consist of two steps: P1 to P and P to P2. Normally, the status returned to P1 would reflect the result of the P1 to P transfer, but P1 is interested in the status of the over all transaction from P1 to P2. One way to deal with this is a delayed status return. The status isn’t sent to the sender immediately after the transaction occurs but only when the sender issues a SEND STATUS primitive. In the example above, after receiving the message from P1, P further sends it to P2, doesn’t send any status to P1 & waits to receive a status from P2. When it receives the appropriate status from P2, it relays it to P1 using the SEND STATUS primitive.

Special Cases of Distributed Facility

This section starts out by stating some facts and reasoning around them.

FACT 0: A perfectly reliable distributed system can be made to behave as a centralized system.

Theoretically, this is possible if:

  1. The state of different components of the system is known at any given time
  2. After every transaction, the status is relayed properly between the processes through their IPCMs using reliable communication.

However, this isn’t possible in practice because we don’t have a perfectly reliable network. Hence, the more realistic version of the above fact is:

FACT I: A distributed IPCM can be made to simulate a centralized system provided that:
1. The overall system remains connected at all times, and
2. When a communication link fails, the component IPCM’s that are connected to it know about it, and
3. The mean time between two consecutive failures is large compared to the mean transaction time across the network.

The paper states that if the above conditions are met, we can establish communication links that are reliable enough to simulate a centralized system because:

  1. There is always a path from the sender to the receiver
  2. Only one copy of an undelivered message will be retained by the system in case of a failure due to link failure detection. Hence a message cannot be lost if undelivered and will be removed from the system when delivered.
  3. A routing strategy and a bound on the failure rate ensures that a message moving around in a subset of nodes will eventually get out in finite time if the target node isn’t present in the subset.

The cases described above are special cases because they make a lot of assumptions, use inefficient algorithms and don’t take into account network partitions leading to disconnected components.

Status in Distributed Systems

Complete Status

A complete status is one that relays the final outcome of the message, i.e., whether it reached its destination.

FACT 2: In an arbitrary distributed facility, it is impossible to provide complete status.

Case 1:

Assume that a system is partitioned into two disjoint networks, leaving the IPCMs disconnected. Now, if IPCM1 was awaiting a status from IPCM2, there is no way to get it and relay the result to P1.

Case 2:

Consider figure 2: if there isn’t a reliable failure detection mechanism present in the system and IPCM2 sends a status message to IPCM1, it can never be sure whether the message arrived without an acknowledgement. This leads to an infinite exchange of messages.


Time-outs are required because the system has finite resources and can’t afford to be deadlocked forever. The paper states that:

FACT 3: In a distributed system with timeouts, it is impossible to provide complete status (even if the system is absolutely reliable).

In figure 3, P1 is trying to send P2 a message through a chain of IPCMs.

Suppose IPCM1 takes data from P1 but, before it hears about the status of the transaction, P1’s request times out. IPCM1 now has no knowledge of the final outcome, i.e. whether the data was successfully received by P2, so whatever status it returns to P1 may prove to be incorrect. Hence, it’s impossible to provide complete status in a distributed facility with time-outs.

Insertion Property

An IPCM has insertion property if we insert an intermediate process P between two processes P1 & P2 that wish to communicate such that:

  1. P is invisible to both P1 & P2
  2. The status relayed to P1 & P2 is the same they’d get if directly connected

FACT 4: In a distributed system with timeouts, the insertion property can be possessed only if the IPCM withholds some status information that is known to it.

Delayed status is required to fulfill the insertion property. Consider a message sent from P1 to P2: what happens if P receives P1’s message and goes into the await-status state, but P1’s request times out before P can learn the final status?

We can’t tell P1 the final outcome of the exchange, as that isn’t available yet. We also can’t let P1 know that P is in the await-status state, because that would reveal that the message was received by someone. Nor can we report that P2 never received the data, because such a situation cannot arise if P1 & P2 are directly connected, and reporting it would hence violate the insertion property.

The solution to this is to provide an ambiguous status to P1, one that is as likely to be possible if the two processes were connected directly.

Thus, a deliberate suppression of what happened is introduced by providing the same status to cover a time-out which occurs while awaiting status and, say, a transmission error.
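To make this deliberate suppression concrete, here is a small hypothetical sketch (my illustration, not from the paper) of an IPCM mapping internal outcomes to user-visible statuses, where a time-out while awaiting status is reported exactly like a transmission error:

```java
public class StatusMapping {
    // Internal outcomes the IPCM can actually observe.
    enum InternalOutcome { DELIVERED, TRANSMISSION_ERROR, TIMED_OUT_AWAITING_STATUS }

    // Statuses visible to the sending process.
    enum UserStatus { OK, TRANSMISSION_UNCERTAIN }

    /** Deliberately collapses two distinct internal outcomes into one ambiguous status. */
    static UserStatus statusFor(InternalOutcome outcome) {
        switch (outcome) {
            case DELIVERED:
                return UserStatus.OK;
            default:
                // A time-out while awaiting status and a transmission error are
                // indistinguishable to the sender, so the intermediate process
                // stays invisible and the insertion property is preserved.
                return UserStatus.TRANSMISSION_UNCERTAIN;
        }
    }
}
```

Because P1 gets the same ambiguous status in both cases, it cannot tell whether an intermediate process was involved, which is exactly what the weak insertion property requires.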

Logical & Physical Messages

The basic function of an IPCM is the transfer and synchronization of data between two processes. This may happen by dividing the physical messages, originally sent by the sender process as part of a single operation, into smaller messages, also known as logical messages, for ease of transfer.

Buffer Size Considerations

As depicted in figure 5, if a buffer mismatch arises, we can take the following approaches to fix it:

  1. Define a system-wide buffer size. This is extremely restrictive, especially within a network of heterogeneous systems.
  2. Satisfy the request with the smaller buffer size & inform both processes involved of what happened. This approach requires that the processes are aware of the low-level details of the communication.
  3. Allow partial transfers. In this approach, only the process that issued the smaller request (50 words) is woken up. All other processes remain asleep awaiting further transfers. If the receiver’s buffer isn’t full, an EOM (End Of Message) indicator is required to wake it up.

Partial Transfers and Well-Known Ports

In figure 6, a service process using a well-known port is accepting requests from several user processes, P1…Pn. If P1 sends a message to the service process that isn’t complete and doesn’t fill its buffer, we need to consider the following situations:

  1. The well-known port is reserved for P1. No other process can communicate with the service process using it till P1 is done.
  2. When the service process times out while P1 is preparing to send the second and final part of the message, we need to handle it without informing P1 that the first part has been ignored. P1 isn’t listening for incoming messages from the service process.

Since none of these problems arise without partial transfers, one solution is to ban them altogether. For example:

This is the approach taken in ARPANET, where communication to well-known ports is restricted to short, complete messages which are used to set up a separate connection for subsequent communication.

Buffer Processes

This solution is modeled around the creation of dynamic processes.

Whenever P1 wishes to transfer data to the service process, a new process S1 is created and receives messages from P1 till the logical message is completed, sleeping as and when required. Then it sends the complete physical message to the service process with EOM flag set. Thus no partial transfers happen between S1 and the service process, they’re all filtered out before that.

However, this kind of solution isn’t possible with well-known ports: S1 would be inserted between P1 and the service process when the connection is initialized, but in the case of well-known ports, no initialization takes place.

In discussing status returned to the users, we have indicated how the presence of certain other features limits the information that can be provided.
In fact, we have shown situations in which uncertain status had to be returned, providing almost no information as to the outcome of the transaction.

Even though the inclusion of insertion property complicates things, it is beneficial to use the weaker version of it.

Finally, we list a set of features which may be combined in a working IPCM:
(1) Time-outs
(2) Weak insertion property and partial transfer
(3) Buffer processes to allow partial transfers to well-known ports
(4) Well-known ports — with appropriate methods to deal with partial transfers to them.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-09-25 18:47:31

A Note on Distributed Systems: A Summary

This post distills the material presented in the paper titled “A Note on Distributed Systems” published in 1994 by Jim Waldo et al.

The paper presents the differences between local & distributed computing in the context of Object Oriented Programming, explaining why treating them as the same is incorrect and leads to applications that aren’t robust or reliable.


The paper kicks off by stating that the current work in distributed systems is modeled around objects, more specifically, a unified view of objects: objects are defined by their supported interfaces & the operations they support. Naturally, this can be extended to imply that objects in the same address space, or in a different address space on the same machine, or on a different machine all behave in a similar manner: their location is an implementation detail.

Let’s define the most common terms in this paper:

Local Computing:

It deals with programs that are confined to a single address space only.

Distributed Computing:

It deals with programs that can make calls to objects in different address spaces either on the same machine or on a different machine.

The Vision of Unified Objects:

Implicit in this vision is that the system will be “objects all the way down”; that is, that all current invocations or calls for system services will be eventually converted into calls that might be to an object residing on some other machine. There is a single paradigm of object use and communication used no matter what the location of the object might be.

This refers to the assumption that all objects are defined only in terms of their interfaces. Their implementation, which also includes the location of the object (remote/local), is independent of their interfaces and is hidden from the programmer. Hence, as far as the programmer is concerned, they write the same type of call for every object, whether local or remote, and the system takes care of actually sending the message by figuring out the underlying mechanisms, which aren’t visible to the programmer writing the application.

The hard problems in distributed computing are not the problems of how to get things on and off the wire.

The paper goes on to define the toughest challenges of building a distributed system:

  1. Latency
  2. Memory Access
  3. Partial failure & concurrency

Ensuring reasonable performance while dealing with all of the above doesn’t make the life of a distributed systems engineer any easier. Moreover, the lack of any central resource or state manager just adds to the various challenges. Let’s observe each of these one by one.


This is perhaps the most fundamental difference between local & distributed object invocation. The paper claims that a remote call is four to five orders of magnitude slower than a local call. If the design of a system fails to recognize this fundamental difference, it is bound to suffer from serious performance problems, especially if it relies heavily on remote communication. A thorough understanding of the application being designed is required to decide which objects should be kept together and which can be placed remotely.

If the goal is to unify the difference in latency, then we’ve two options:

  • Rely on the hardware to get faster with time in order to eliminate the difference in efficiency
  • Develop tools which allow us to visualize communication patterns between different objects and move them around as required. Since location is an implementation detail, this shouldn’t be too hard to achieve.


Another difference that’s very relevant to the design of distributed systems is the pattern of memory access between local and remote objects. A pointer in the local address space isn’t valid in a remote address space. We’re left with two choices:

  • The developer must be made aware of the difference between the access patterns
  • The system handles all memory access

To unify the differences between local & remote access, we need to let the system handle all aspects of access to memory. There are several ways to do that:

  • Distributed shared memory
  • Using the OOP paradigm, compose a system entirely of objects, i.e., dealing only with object references. The transfer of data between address spaces can be dealt with by marshalling & unmarshalling the data by the layer underneath. This approach, however, makes the use of address-space-relative pointers obsolete.

The danger lies in promoting the myth that “remote access and local access are exactly the same” and not enforcing the myth. An underlying mechanism that does not unify all memory accesses while still promoting this myth is both misleading and prone to error.

Hence, it’s important for programmers to be made aware of the various differences between accessing local & remote objects so that they don’t get bitten by not knowing what’s happening under the covers.
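One way to see why address-space-relative pointers don’t survive the trip between address spaces: marshalling copies the object graph, so the receiver gets an equal but distinct object. A minimal Java serialization sketch (my illustration, not from the paper):

```java
import java.io.*;

public class MarshalDemo {
    static class Point implements Serializable {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    /** Round-trips an object through a byte stream, as an RPC layer would. */
    static Point roundTrip(Point p) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(p); // marshal: flatten references into bytes
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Point) in.readObject(); // unmarshal: rebuild a copy
        }
    }

    public static void main(String[] args) throws Exception {
        Point original = new Point(3, 4);
        Point copy = roundTrip(original);
        // Same data, different identity: the "remote" side never shares our reference.
        System.out.println(copy.x + "," + copy.y + " sameRef=" + (copy == original));
    }
}
```

The deserialized Point carries the same data but a different identity, which is precisely why a system that pretends remote references behave like local pointers is promoting a myth.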

Partial failure & concurrency

Partial failure is a central reality of distributed computing.

The paper argues that while both local & distributed systems are subject to failure, it’s a lot harder to discover what went wrong in the case of distributed systems. For a local system, either everything is shut down or there is some central authority which can detect what went wrong (the OS, for example).

However, in the case of a distributed system, determining what has failed is extremely difficult since there is no global state or resource manager available to keep track of everything happening in and across the system, hence there is no way to inform other components which may be functioning correctly which ones have failed. Components in a distributed system fail independently.

A central problem in distributed computing is insuring that the state of the whole system is consistent after such a failure; this is a problem that simply does not occur in local computing.

In order for a system to withstand partial failure, it’s important that it deals with indeterminacy and the objects react to it in a consistent manner. The interfaces must be able to state the cause of failure if possible and allow the reconstruction of a “reasonable state” in case the cause can’t be determined.

The question is not “can you make remote method invocation look like local method invocation?” but rather “what is the price of making remote method invocation identical to local method invocation?”

Two approaches come to mind:

  1. Treat all interfaces & objects as local. The problem with this approach is that it doesn’t take into account the failure models associated with distributed systems and hence it’s indeterministic by nature.
  2. Treat all interfaces & objects as remote. The flaw with this approach is that it over-complicates local computing. It adds a ton of work for objects that are never accessed remotely.

A better approach is to accept that there are irreconcilable differences between local and distributed computing, and to be conscious of those differences at all stages of the design and implementation of distributed applications.

A Note on Distributed Systems was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-09-25 10:52:45



“Nava”  means nine and “Ratri” means night, hence Navaratri is the festival celebrated over nine nights. The Navaratri has begun and I thought of making my debut blog post an auspicious one by writing about it. 🙂

Navaratri for me during my school days was always associated with the happiness of getting a ten-day break from school. But let me now throw some light on the beautiful aspects of this festival. It is celebrated in different parts of our country for different reasons, but the essence is the same and powerful in all: the victory of good over evil. It is believed that Goddess Durga battled with the asura (demon) Mahishasura and emerged victorious to restore Dharma. It is also celebrated as the victory of Lord Rama over the demon king Ravana.

In the first three nights, Goddess Durga is worshipped; in the next three, Goddess Lakshmi; and in the last three, Goddess Saraswati. The beauty of this festival is the inner meaning of celebrating it. Goddess Durga symbolizes the Shakti which destroys the negative tendencies within us. She is called Mahishasura Mardini, the destroyer of Mahishasura, the asura, where Mahishasura represents the negative tendencies in us, like laziness, ignorance, ego, etc.

Bommakolu at Amritapuri

But just by destroying the negativities in us, we are not immune to them; hence, to purify our mind and to instill positive qualities, we worship Goddess Lakshmi, the giver of wealth and prosperity, where the wealth is not the material wealth that we usually associate it with but the spiritual values that we inculcate.

In the last three days, we worship Goddess Saraswati, the Goddess of knowledge. After eliminating negative tendencies and instilling positive values in us, we must attain the knowledge of the self, which is the highest knowledge. The ninth day is the day of Ayudha Puja, where we worship the tools that we use in our daily life. For students, it’s the books they use for studying, and that’s why it was my favorite day of the whole year, as it is the only time nobody will persuade you to study (:P). On the tenth day, we do Saraswati Puja and Vidhyarambham to get the blessings of Goddess Saraswati.

Goddess Durga, Lakshmi, and Saraswati are not different but the different aspects of divinity. Navarathri has already started and let’s not miss out on understanding the spiritual values that the festival symbolizes. We should eliminate the negative tendencies and inculcate positive values in us, only then will there be a true victory of good over evil!

Devi Krishnan | Pensieve | 2017-09-23 15:12:07

Hi again!

The previous post on this topic offered an overall summary of what I’ve been learning in my conversations with folks. Now I’d like to go into a little more detail on some of the topics.

So what should I learn?

Identifying the best areas to focus on is probably one of the hardest tasks, especially for those folks who cannot afford to get a degree or do a bootcamp like General Assembly. The guidance offered through official programs is not to be underestimated!

What do you already know?

You almost certainly have experience in _something_ that falls into UX design. Whether it’s researching how to do something, drawing things in your spare time, talking to someone new, explaining a skill or idea to someone else, or trying to use a new piece of software: these are all applicable to UX in some way or another.

The way I like to think about UX research and interactive design breaks down like this (see my quick and dirty handout from a recent talk I did):

Everything informs everything else, from the information you gather at the beginning, to the analysis with other folks, to the early sketchy design possibilities you create, through to iterating on your design based on feedback you get from stakeholders and users.

When these designs need to be produced in higher and higher fidelity as your team gets closer to something that works well for the stakeholders, there will likely be continued iterations based on what’s actually feasible and plausible. (I am not as experienced in the visual design aspect of the UX process, so I cannot offer as much structure around that part.)

What do you like to do, what do you need to learn?

Figure out what you know how to do or could easily learn. With that information, you can focus on what you know how to do and how to integrate it into a project, and then on improving any areas you specifically want to learn.

I personally need more practice in visual design and data visualization: I’m not especially familiar with visual design or otherwise making things visually approachable, and these both seem useful to at least have a basis in.

I’m working on identifying the best ways for me to improve these skills, and found that working on badges with Fedora folks helped a bit. Among other things, it meant that I had the opportunity to ask what people did when they did specific things that I might otherwise not have encountered (such as specific keystrokes in design programs).

For other folks, it might be wise to learn the basics of HTML and CSS. Even if you do not wish to write the code for your designs, it is immensely helpful to understand how programming works.

Depending on one’s level of familiarity with these, something like might be your best bet. These are free courses that let you see what you are doing as you go along. You might also appreciate, which will update with your changes as you go along, and which supports HTML, CSS, and Javascript.

If you’re not familiar with how to phrase things, maybe you want to work on writing content for your designs. Maybe pretend that you are talking to someone who has never run into the thing you are talking about, or to someone who is too busy to give you more than 30 seconds to a minute to read whatever you have to say. Figure out the most concise, but clear, way to say whatever you need to say. Even if you don’t want to write the content for your designs, it’s really important to be able to express yourself simply and clearly. Words are important, along with visuals and structure.

If you are looking to get into research, it would behoove you to learn some quantitative research, not just qualitative. One of the major skills that folks hiring quantitative researchers look for is the ability to tell whether the company is measuring success effectively.

Possible places to get cheap but decent classes include Lynda and Coursera. I’ve done some Coursera courses, specifically “Human-Centered Design: An Introduction”, ”Design Principles: An Introduction”, and “Information Design”.

Whatever it is that you need to learn more about, there is probably a way to do it online (remember to check Youtube!). However, it is often the things one needs the most help in that are the hardest to figure out how to learn on one’s own. Knowing the terminology is important for any successful google search!

(Note: I suspect that offering classes in basic aspects of each piece of the UX process would be a good value for the UXPA boston group, given the content of the previous paragraph. Not everyone learns from videos/written instruction very well)

Do a project. Any project

In my experience, the best way to learn is to find a specific design project — really any design project is fine to start out — and start working on it. If you have friends who write programs, see if they want your help. If you have friends with lots and lots of ideas, ask them to let you help design one of them. If neither of these are the case, consider an area in which you wish that something existed, or in which you wish a piece of software were easier to use. At this point, it matters less if your project goes live — although that’s always preferred if possible — and more that you are working on something.

Take lots of screenshots and notes and keep track of what you’ve tried, what worked, and what didn’t work. These will be useful when it comes time to create your portfolio!

Remember: the point of your first project is to learn, rather than to succeed, and most people learn the best from failure. Failing at something isn’t actually bad. Indeed, it’s almost expected, since you’re new at it. Figuring out where things went wrong is the important part.

That said, it can be difficult to know what to do at any stage of a project, especially if you’ve never tackled one before. This is where having someone you can check in with is invaluable. Not only is UX design not really a solitary activity, but having someone to help nudge you on the right path when you get stuck is fantastic.

If you have a mentor, that’s great. If not, see if you can find other folks who are also job hunting to work with. Chances are good that you are each better at different pieces of the project, and this will provide you both with additional experience.

For possible mentors, join (credit to David Simpson for this!) and get in touch with someone who looks useful for your needs.

If you’re still struggling to figure out a design idea, this page might be helpful.

If you’re not sure how to approach a project, this site talks about the whiteboard design challenge that sometimes happens in interviews, and is a decent overview of what a design project could involve.

(Note: Offering folks ways to get in touch with others who are looking for their design projects to work on might be a useful feature. Similarly, ways to find mentors.)

Which tools?

In general, you will need to use a tool of some sort for your design project. Paper prototypes are amazing, no doubt about it. Unfortunately, they are difficult to test out remotely, and rely on excellent drawing skills and handwriting to be easily usable.

There are a large number of options for tools in the UX design space.


Some are focused on being easy to use for making low- and medium-fidelity mockups and prototypes (Balsamiq was my first tool, for example. Axure is easy to start out with, but a bit more complicated when it comes to turning a design into a prototype). Some are specifically meant to help folks turn their designs into prototypes (like Invision, which is free and supports uploading existing designs) and often support collaboration quite easily. Others are more on the visual design side of things, although sometimes still include fairly easy ways to make mockups and prototypes (Sketch is extremely popular, but Mac-only).

Adobe’s creative cloud service includes a lot of commonly used graphic design tools, whether photoshop (for which Gimp is a decent free and open source substitute, if poorly named), illustrator (vector graphics; try Inkscape for a free and open source substitute), indesign (as far as I can tell it’s about design for publishing online and off? Not sure of the best free equivalent) or the recently added experience design (XD beta, again not sure of an equivalent, although I think it may be meant to compete with Sketch).

The ones I’ve listed above are the most frequently mentioned in job applications, especially Sketch and Adobe creative cloud. Axure and Invision are also quite common. There are a _lot_ of other newer (and often free/beta) options, although I’ve not done much exploring of those.

(note: classes/mentors for basic introductions to the most common design tools might be useful, especially for those who are not already familiar with Adobe Creative Cloud. Not everyone learns from videos/written instruction well)

Other tools and techniques

You may also want to investigate tools for mind mapping (I like MindMeister, free for a small number of maps), which can be useful to keep track of relevant ideas and concepts. Or for remote affinity mapping (I like Realtimeboard, free for a small number of boards) and other sticky-note/whiteboard-based activities.

There are a lot of other techniques that could be good to learn, including task flows and journey maps.

Many companies want folks with experience in the agile framework, so learning what that is and the various ways that design folk have figured out how to integrate into it would be useful.

If you are not already familiar with style guides and pattern libraries, getting a basic understanding of those would be useful.

Ok, I’ve done my first design. Now what?

First, congratulations! That’s often the hardest part.

Review your work

Take a look at what you did with an eye toward improving. What do you want to learn more about? What do you need help with? Where do you feel you excelled?


Take a look at various blogs in UX; now that you’ve done your first project, you will likely find that they start making more sense to you. I found that reading various blogs and watching videos was overwhelming before I’d done a project, because I had no idea what was relevant.

Twitter has a lot of fantastic UX folks, although who you want to follow may be partly location-based. I like Jared Spool, Joe Natoli, Luke Wroblewski, Mule Design Studio, Dana Chisnell, Sarah Mei, and What Users Do. is an excellent overview site that I really need to revisit myself, now that I’ve got some experience in UX.

I’m also fond of UX Mastery, and the Nielsen Norman Group.

There’s also a lot of good books out there!

(note: a curated list of useful links and books would be really helpful!)


Your best bet would be to summarize what you did, whether as part of your portfolio or as preparation for your portfolio. Keep your eye out for things you would have done differently next time, as well as things you think worked out well. You want to describe your process, and at the same time tell a story about what you did and why. Remember to be clear on what you did and what your teammates did: as I’ve mentioned above, UX is typically a team process.

If you want to write the HTML and CSS yourself, that’s fine. However, beware of the problem of running down rat holes to make things look perfect, and never actually creating a portfolio that you can share. That’s a major reason I’m moving away from a static website to — it’s so much easier to do good design if I’m not also trying to write the code.

Tell a story?

I’ve had lots and lots of people say to tell a story, so I’ll share something about that. I had no idea what that actually _meant_ until I had a chance to a) dig deeper into what specifically folks were thinking about and b) see examples of this. One of my major problems is that writing a portfolio for a UX researcher is _hard_. You tend to have fewer pretty things to show folks than the typical graphic design portfolio might, and you may or may not have the design skills to make your portfolio pretty.

To the best of my understanding, your story needs to include as much guidance for your reader as possible. Like everything else, use your nascent UX skills on your portfolio: guide your reader through it.

Guide your reader

Use Gestalt principles to help your reader know where to go next, and I recommend an overview (this links to my in-progress update for my website) of your major goals and results to act as guideposts.

From this page: Include as much as possible of the STAR method in your portfolio to communicate what the situation is (goal of the project), what tasks and actions you accomplished (your UX toolkit of wireframing, usability testing, sitemaps…) and what the end results were (analytics, final designs, customer testimonials).

Note that I’m still struggling with the best way to explain the end results in some of my projects, because they either were one shot things (through hackathons) or are on pause while underlying things are completed.

I’ve got a portfolio, now what?

Get someone to look at it! Just as in everything else, you want someone else to take a look because there will be something you’ve missed, or ways in which you are not as clear as you’d like.

If that’s not an option, take a week or two, and then take another look at it. You’ll probably find typos and brainos (places where what you wrote doesn’t actually make sense), even though you are the one who originally wrote it.

(note: I expect that offering folks portfolio feedback would be really helpful! I’ve personally gotten in touch with someone from and have a review pending)

Do more design work!

Find more projects to work on. Now that you have your first one under your belt, this will go more smoothly, and you likely will find it easier to identify areas to work on.

If you happen to be able to find an internship in UX (say, Outreachy), take it! Guidance is amazing.

Start looking for jobs

This will help you get an idea of what the market looks like right now. It may help you decide what tools or skills to learn, or identify things you specifically _don’t_ want to do. And hey, you might find a job that looks good!


Honestly, I should have already said this, but this is easier when you have a little experience. At least in my case, having some basic knowledge makes it easier to talk to folks about UX.

Better yet is if you have a specific goal in talking to folks. For example, since I’ve been collecting data about the hiring process in Boston, I’ve had no trouble contacting folks about interviewing them. You may be able to take the tactic of asking folks about what they do in UX, potentially allowing for the opportunity to learn more about UX at their company.

Business (MBA) folk do something called an informational interview. In some cases, this appears to mean talking to folks about UX at their company. In others, it might involve the possibility of going to someone’s company and actually seeing how it works. As far as I can tell, your best bet is to see if you know anyone working at a company that includes UX folks and see if you can get any of them to introduce you. You can also message people on LinkedIn without a connection, but that may not work as well.

Present on your project

If you have the opportunity to present on a project you’ve done, take it. Presenting skills are very important in UX, and practice does help. Talking in front of a group of people can be scary, especially if you’re also trying to get them to hire you. Practice in a safer space, first, if you can.

Be visible online

If you don’t already exist online, you really should. Start a blog (I’m quite fond of Medium) about your UX experiences/learning/thoughts. Be active on twitter. Be visible in your UXness.

What next?

I’ll be chatting with more folks over the coming weeks, and will be speaking to the UXPA Boston board the first week of October. Watch this space!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-09-22 21:55:10

Another successful Randa meeting! I spent most of my days working on snappy packaging for KDE core applications, and I have most of them done!

Snappy Builds on KDE Neon

We need testers! Please see Using snappy to get started.

In the evenings I worked on getting all my appimage work moved into the KDE infrastructure so that the community can take over.

I learned a great deal about accessibility and have been formulating ways to improve KDE neon in this area.

Randa meetings are crucial to the KDE community for developer interaction, brainstorming, and bringing great new things to KDE.
I encourage all of you to please consider a donation at

Scarlett Clark | Home of Scarlett Gately Clark | 2017-09-21 12:54:59

One of the most recent projects that FSFE is undertaking is to help the community define itself and for everyone to come to a common understanding of what its values are and what its mission and goals are.

Since its founding in 2001, individuals engaged in the FSFE, from coordinators and volunteers to full-time employees, have come to develop their understanding of what FSFE is and what our shared values are. Sometimes the views expressed by individuals have been similar to one another, sometimes they have diverged. That’s all fine, since people’s thoughts and perceptions are constantly changing.

A strategy review in 2014 identified at least seven different ways that people describe and communicate what FSFE is. This phenomenon is a result of the process of evolution and is fine. Moving forward, we see a need for us to come together under a common identity; a shared understanding of what FSFE is, what’s important in our work, what values we have and share, and how this shapes FSFE for the future.

With the FSFE in 2020 project, we are looking to define and renew the identity of FSFE, a process similar to one we undertook when founding the organisation.

Now, 15 years later, we will be able to reflect on our work over the years and come to an understanding of what aspects of our identity have remained genuine and guided the organisation through these 15 years.

Our primary focus will be on taking interviews from volunteers, coordinators, employees and others who feel that they have a connection with FSFE. We will also talk to people outside of the FSFE who have come in contact with our work and will be able to reflect not only on how we perceive the FSFE inside of it, but how we’ve managed to convey our identity to the public over the years.

We will ask individuals to participate in interviews, sometimes directly, sometimes by asking for people who are interested in participating. We’ll conduct surveys circulated through our community and newsletter, and we will share our findings regularly, on our web pages, and on our team wiki.

Click here if you would like to take part in the survey.

We’re starting on an exciting adventure, and we hope you will join us on our way forward!

Kristi Progri | Kristi Progri | 2017-09-20 08:27:05

I’ve been slowly moving my website from the official pelican-based version to an in-progress Wix-based version. I learned interesting things around current web development while using the Pelican version, but I found it difficult to implement the kinds of design choices I wanted to make. I also found it quite difficult to get a responsive design that _stayed_ responsive when I made changes to the CSS file.

Wix is very nice for many design decisions, in large part because one can take a particular design element and put it wherever you want on the page. There is no futzing with the HTML or CSS, and no need to learn Python or Javascript.

Given that I want my page to be welcoming and easy to follow, easily choosing specific design elements is vital.

Tell a story!

One of the most important aspects of a UX portfolio is demonstrating one’s UX skills. This means walking folks through your process and making it easy to follow and understand.

One of my major challenges was (and is!) deciding how to structure my portfolio to offer the greatest ease of use without losing too many of the specific details. Upon the recommendation of one of the many recruiters I’ve spoken with, I’ve been adding an overview page to each piece of my projects:

In this version, I offer an overview and links to more details of some of the pieces.

If you compare this to the currently official version of my page, it’s a clear and huge difference in usability:

This doesn’t show the overall goal, what my role was, or offer much guidance. It’s also not physically structured for easy reading.

How to tell my story?

One of my major struggles is with offering too much information. Too many details, and too little structure.

I want people to know what I did! Unfortunately, if there’s not enough structure, they won’t read any of it. If there’s too much information, they won’t read any of it either. So my major task is to take what I have and create overviews; not just for the main page of a project, but for sub-pages.

This is unfortunately not quick or easy! As a result, I’m working on bringing the overviews back to my pelican site as I make them, with the eventual goal of fully transitioning. Sadly, I have been unable to convince my pelican site to let me stack things horizontally. My impression is that this is one of the major improvements to my Wix site, so even though I’ll bring some of the ideas back to Pelican, they are simply not as well-designed there.

I’ll be asking for feedback before I move over completely, of course. In the meantime, it’s pretty clear to me that my Wix site is just _better_. I’ll also be grabbing what I have at the Pelican site before ditching it, as I worry that I’d lose information otherwise.

Other changes?

I’m also ditching the references to Cambio Buddies. It was a valuable and useful project, but I had very little guidance for what I was doing. I made a lot of mistakes, used techniques poorly, and am generally not happy with that project. Maybe it’s a mistake to remove my first project, but I just don’t like it.

Some folks have suggested incorporating the ‘current’ and ‘complete’ design projects into a single area. I’m reluctant to do this, since the current projects are still in process: I don’t want to be presenting them as if they were finished when they are not.

Similarly, folks have suggested getting rid of the design artifacts page. I’m not completely clear on why: they are in the Projects area, and it seems helpful to let folks get to a specific artifact quickly if they so desire.

One of my early bits of feedback for the Pelican portfolio was a lack of process. I’m still not entirely clear on what was meant by that, although I suspect that the lack of overview may have been part of it.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-09-16 18:30:52

Hi folks!

I’ve been meeting, finding, and interviewing folks at various points in their UX careers. It’s been fascinating, and reminds me that I’m _much_ better at networking when I have a reason to talk to people.

I’ve not yet had a chance to analyze my interviews in depth thus far, but I have noticed some interesting trends.


Portfolios and online presence

  1. It’s difficult to know what to put in a UX portfolio, especially for researchers. Lots of folks talk about having a story for your reader to more easily understand and follow what you’ve done. I’m collecting information on what this could mean in practice.
  2. It’s really helpful to have an online presence that shows how you think about design, whether a blog, Twitter, Behance, Dribbble, or GitHub. Some companies won’t consider someone without an online presence demonstrating their thought processes and personality. Put links to your online UX presence in your resume.

Finding your first job

  1. There’s not a lot of companies hiring folks who are new, and there seems to be a bit of a lull right now even among those who typically would be doing so. There’s a much better chance to get a job if you have at least 2–3 years of experience.
  2. Most internships require that one is currently or recently in school. It’s also difficult to find mentors or apprenticeships.
  3. Folks doing the hiring may or may not understand what UX is, what each UX role involves, or what the best things to look for are. Job descriptions may or may not involve throwing everything they might want in there, so it’s often worth applying even if you don’t know all of what they are asking for.
  4. Lots of companies are playing catchup — they feel like they should have gotten into UX 10 years ago, so think they need senior UXers to get things jumpstarted. Those senior UXers are typically under-resourced and rarely have time or space to take on juniors and help get them the experience they need. Unfortunately, without higher ups understanding and believing in UX, even hiring seniors often results in failure of the UX team.
  5. Very few folks I talked to have specific tools they prefer folks to know how to use, except in cases where getting permission to use specific tools is complicated. This is especially relevant given the sheer number of tools out there, whether for wireframing, prototyping, or creating high-fidelity visual designs.

Keep learning

  1. It’s hard to figure out what online resources and books are the most useful to read or follow.
  2. It’s important to keep working toward learning more about UX — even for folks who have a UX job. The field is constantly evolving.

Getting experience and taking criticism

  1. It’s difficult to get experience before you have a job in UX. This may be worse for researchers, as visual designers have an easier time selling their skills (but ‘looking pretty’ may not actually translate to ‘useful’).
  2. Even if you’re not great at sketching by hand, it’s really important to be able to jot your ideas down on paper visually. This offers a way to communicate your thoughts, and is quick and easy enough that you’re less likely to be attached to the ideas you’ve come up with. In turn, the sketchiness and reduced attachment makes criticism easier to take.
  3. Work with other folks on your designs. Practice giving and taking criticism, because no one gets it right on the first try. Design is a process for a reason, and there’s a lot of different pieces to it.

Possible solutions?

Getting experience

This is a significant problem. Given that few places are hiring folks without a couple of years of experience, newbies and career changers need to find ways to get that experience.

For those who can afford it and have access, in-person UX programs like Bentley’s master’s in human factors program and Jared Spool’s Center Centre are an excellent choice. These offer curated and guided information, connections, and practice at design. Unfortunately, these and other programs rely on proximity and available time and money, and are not inexpensive (although Center Centre tries to mitigate that part).

There are also online courses which can be helpful, and bootcamps both on and offline, but these again cost money and may or may not offer built-in networking.

So how does one find work, even if unpaid? There’s a few options that I am aware of:

  1. If you can make Code for Boston’s weekly meetings, that’s a good option. They tend to have ideas for what to work on, and specifically mention both developers and designers.
  2. You can find other folks looking for UX work, and see if they want to team up with you on something. This is especially useful if you each have different skills: like a researcher and a visual designer, or a UX person and a developer. This does require being able to find those folks, and is one possible option for how my project can offer help. These designs are less likely to go live, but any projects are better than no projects.
  3. You might be able to find non-profits who need help, although this does require a) that the non-profit is able to understand the value of what you can offer them, b) you know the right people to talk to, and c) that they have someone able to implement the suggestions you make. Attending a Give Camp may help with those problems, but the New England page appears to not be functional (the website for it goes to a godaddy page). This may be another thing I can offer help with through UXPA.
  4. Outreachy might be another option. This is a program to help women and minorities get into open source software, and is not specifically focused on UX. However, I was able to do a UX research and interaction design project with the Fedora Project through outreachy, and it was fabulously helpful and interesting.
  5. You may be able to find an open source project to help out with, such as Red Hat’s Patternfly Design Library (also on github).
  6. Do you know any developers working on projects in their spare time? See if you can help them out.
  7. If you are currently in school, or just recently left, look for design internships. These are easier to get if you have some design experience, perhaps through your classwork.

Options 2 and 6 may be more difficult for designers just starting out, as they are much easier to do if one has some guidance for how to approach design problems.

Finding a mentor

Mentorship is really important, especially if you cannot afford to attend school and get guidance that way. Unfortunately, it can be difficult to find a mentor, and precisely what a mentor can offer or do for you varies by the mentor.

Ideally, I think that mentors should offer:

  1. Guidance around how to start or continue your UX learning process.
  2. Suggestions for how to improve the things that you’ve identified as weaknesses in your skillset. Alternately, ways to identify those weaknesses.
  3. Portfolio and resume reviews.

Beyond this, it’d be lovely if mentors could offer networking help (eg: connections to open positions and folks who may eventually have open positions), and suggestions for projects to work on.

The XX UX community offers a mentorship matching program in some cities, although Boston is not yet one of them. This may be another opportunity for my project to help folks out, whether by working with XX UX (which would mean it’s only available to women), or by building on their example and making our own program.

Curated resources

Given how much information there is out there, a possible way to help folks out would be to offer resources that experienced UX folks agree would be useful to those who are starting out.

These resources could include basic guidance for portfolios for various design specialities, design interviews (including design exercises), and job applications, as well as structure within which to learn design processes.

Also relevant might be instruction on persuasion, on communicating and working within cross-functional teams, and on presentation skills (both creating a presentation and presenting it).

We might want to include specific information such as the use of shortcut keys within design programs (Ctrl+D, Alt+Ctrl+Shift+arrow keys for movement, etc.), recommendations for tools to start out with and an introduction to their use, and suggestions for how to use those tools to more easily share and maintain one’s designs (since all good design involves many different folks in various different teams).

Finally, we could offer recommendations for good books for folks in various stages of learning.

Keep learning!

One of the most important things for someone new to a field is to keep learning. Be visibly interested in and passionate about your field: it’ll communicate itself to those you are working with, and will help keep you informed and aware of what’s going on.

At the same time, don’t believe everything you read — some folks make things look more clear-cut and simple than actually happens. Reality is messy!

Don’t be afraid to try things out. No one in UX knows everything about it, and mistakes are how to learn.

Remember to mention others who had a role in any design you talk about: design isn’t typically an individual process (collaboration is important!), and hiring managers want to know that you understand and can talk about your role in the project.

If you’re interested in research, learn both qualitative and quantitative methods. Most of your work will probably be in qualitative spaces, but it’s useful to be able to measure success (are we accomplishing our goals?). It’s also helpful to understand basic data visualization techniques.
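As a small illustration of that quantitative side (my own sketch, not something from the original post): even a modest usability test’s task success rate can be reported with a confidence interval, so you can say how certain you are that a goal is being met. The Wilson score interval behaves better than the naive normal approximation on the small samples typical of usability studies:

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a success proportion.

    More reliable than the normal approximation when trials is small,
    as in most usability tests.
    """
    if trials == 0:
        raise ValueError("trials must be > 0")
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# e.g. 8 of 10 participants completed the task
low, high = wilson_interval(8, 10)
print(f"task success: 80%, 95% CI roughly {low:.0%}-{high:.0%}")  # ≈ 49%-94%
```

The wide interval is the point: with only ten participants, an 80% observed success rate is compatible with anything from roughly half to nearly all users succeeding, which is useful context when reporting results.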

Remember to take pictures at all stages of your process! This will be hugely helpful when it comes time to make your portfolio.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-09-16 18:03:57

Hello Suraj, you could check out all the details on their GitHub page under the Documentation section. Link:

Sejal Khatri | Stories by Sejal Khatri on Medium | 2017-09-16 10:20:32

The rest of the trip was uneventful, at least as far as bits were concerned. (Do not get me started on transportation. Or pest infestations either.)

My new SIMs worked as expected, with a recent legal change I can use the ones I bought in Italy to roam at no extra charge in other EU countries. This was super handy for the five hour layover in Germany. It also means (I think) I can keep them active. I go to Europe from time to time, but not Italy every year as would be needed to refresh my phone service and keep the number. (Important note: Italy requires a federal tax ID number to buy a SIM. I have one, but most tourists don’t.)

The downloadable encrypted disk worked, and once I got it I was able to access the keys for my server. I was still cut off from everything tied to my US phone, but it was tolerable. Web email only, and I had to send messages from the mail application because of my setup. So that data was still local. But there were only a few things I really had to reply to. Besides, I was supposed to be speaking Italian, not websurfing.

I have a separate laptop login, with restricted permissions, that I intended to use for general web browsing. I mostly used the phone, however. The one helpful thing was my mifi had more bandwidth than my phone, so I could connect to it by wifi and VoIP calls over the VPN were less terrible.

On the flight back home, I deleted the disk image, cleared data in all the browsers I had been using, and shut down. That logged me out of websites, with no way to get access without my US phone. Nothing actually happened at customs, but the point of practicing one’s security plans is so you are more confident they would work (and you can execute them) if actually needed. And, in my case, to write up what I thought about it.

The most unexpected surprise was the reminder that average people have no idea what two factor auth is. They were confused why I could not login to Facebook, when I had a perfectly good phone and laptop right there. I mean, everyone is on Facebook right? It was challenging to explain that I required a message that was sent to a device I didn’t have. (I think I was then deemed one of those “computer people.” Fair enough.)

The VPN set up for always-on worked about as well as it does in the US, so I’m happy with that. (Some websites still reject you tho, boo.) I tried to use public wifi in various locations (the mall, inside train stations, etc) but mostly they did not work correctly and I was stuck with whatever signal I could find on my own. (They were either over-used and not responsive, or blocked my VPN connection.)

Next time I’m going to get a plan for my phone that includes voice service. I couldn’t call taxis, and that was a pain. I was not in a big city where it’s easy to find a taxi.

Andrea Longo | Feorlen's Other Blog | 2017-09-15 13:39:55

Grants Available for Young People in Ontario

Hello all!

I have tweeted about this a bunch in the past, but to restate: I am here and willing to support young folks through the process of grant application. I am not an expert by any means, but have the skill set to help you go through program design, budgeting, and/or overall editing to ensure the best possible outcome. I am focusing my time to better support young women of colour, specifically, who are interested in applying for grants.

Currently, there are two big grants available for young people based in Ontario, Canada!

If you have a moment to read through the grants and find yourself (again, a young woman of colour) interested in applying for them, I’d be happy to assist. You can sign up to chat with me here. I’m down to discuss your potential project in person, too, if you’re Toronto-based. You can also connect with the program manager at each foundation, who are also great and enthusiastic resources in terms of support and clarity.

Good luck, and I hope to hear from you soon!

Inspirit Foundation

ChangeUp grants create opportunities for people aged 18–34 to initiate creative solutions-focused projects that address issues of discrimination and prejudice in their local communities. The $10,000 one-time grants are for projects that implement creative strategies to open minds and shift attitudes that perpetuate prejudice and exclusion based on ethnicity, race or religion. Through the grants, Inspirit Foundation is particularly interested in supporting the development of effective, connected and influential young change leaders. We’re accepting ChangeUp applications until September 25th, 2017.

Laidlaw Foundation

They just announced their new granting strategy!

Here is the current round of funding available tied to Scaling Impact.

Scaling Impact supports youth-led groups, youth-­driven organizations and their leaders with a proven program or idea, which has made a significant contribution to the inclusion and wellbeing of young people, to enhance and expand their impact and influence. More specifically, it invests in groups that:

  • have demonstrated relevant and effective approaches for reaching and meeting the needs of underserved young people;
  • are advocates that engage with and work to influence decision-makers;
  • have solid and appropriate partnerships in place; and
  • are recognized as an important and essential service/space/advocate by the young people they reach and other stakeholders.

Applying for a Scaling Impact grant is a two-step process that begins with submitting an expression of interest.

  • Expression of Interest Deadline: September 25, 2017.
  • Notification of whether a full application is being requested: By October 9th, 2017.
  • If there are any more grants available for young folks, please let me know and I can add them to the list.

Kindest Regards,


Nasma Ahmed | Stories by Nasma Ahmed on Medium | 2017-09-14 21:36:27

I don’t know where to begin, but I promised Zainab Bawa I would write about my experience speaking at tech conferences and in the open source world. Sorry for the late post. Here is my speaking journey!

The ‘first’ proposal

My first tech talk proposal submission was for JSFoo in 2015. The talk was titled ‘JavaScript SDK for Perceptual Computing using Intel RealSense technology’. The talk wasn’t accepted and I was disappointed, but there were some useful lessons to learn. My submission was too focused on Intel’s technology, and it appeared I was advertising for it. Conference submissions shouldn’t be focused on proprietary technologies or sales pitches; the audience will never like that. The conference reviewers were very kind to give me feedback, and I took it in a positive spirit.

Screen Shot 2017-09-07 at 20.32.03

The ‘first’ talk

I worked on the feedback and submitted my second talk proposal, which was accepted, and I gave my first tech talk at JSConf Asia Singapore in 2015.

Screen Shot 2017-09-08 at 11.14.11


The ‘first’ international conference

Prior to submitting my first-ever talk proposal (for JSFoo), one fine morning I woke up to this fantastic email.

Screen Shot 2017-09-08 at 11.23.11

This email was the biggest game changer in my tech life. I had stumbled upon the diversity scholarship from JSConfEU and decided to take a bold step and apply. When I was applying, little did I know that I would bag the scholarship. My joy knew no bounds when I read this email. I can still recall that morning when I had the first dose of confidence++ 🙂

This was just the beginning… Below are some of the places where I have spoken to date:

  • JSConf Asia 2015, Singapore
  • Grace Hopper Conference 2015, India
  • JSConfBP 2016, Hungary
  • JSUnConf 2016, Germany
  • CSVConf 2016, Germany
  • Frontend Union Conf 2016, Lithuania
  • Web Summer Camp 2017, Croatia
  • MozFest 2017, London (Upcoming this year in October)
  • Women Who Code Bangalore
  • Women Who Code Berlin
  • Zalando Tech Meetups

PS: I have a long-pending to-do to add all of the above links in one place on my website (I hope to do it soon ^_^)

In short, this has been my wonderful journey (and counting) into the speaker’s world.

Success is when preparation meets opportunity

Words of wisdom

I didn’t achieve all of this overnight. I have constantly persevered and put in a lot of blood and sweat. The best thing is, I never gave up! And the most important thing – ‘One doesn’t need to be an expert to speak.’

Screen Shot 2017-09-08 at 11.49.12
Link to the tweet

Why should you consider speaking at conferences?

Speak – we need diversity!

  • Speak – you will definitely learn a lot
  • Speak – you will teach others what you have learned
  • Speak – to get a confidence boost
  • Speak – you know you are awesome
  • Speak – the community needs new people and new ideas
  • Speak – you have nothing to lose!

Princiya Marina Sequeira | P's Blog | 2017-09-08 10:06:24

Coding the summer away!

Google Summer of Code 2017 is just about over and I am absolutely thrilled to have worked for the Wikimedia Foundation under mentors Sage Ross and Jonathan Morgan for two consecutive internships now (including the Outreachy December 2016 internship)! This post provides a brief overview of my project.

Summing it all up!

Wiki Ed Experience:

Over the course of my internship I got in touch with Dashboard users from all across the world (Israel, Canada, USA, Egypt, Serbia, Czech Republic), interacted with them, organized user testing sessions on my own, helped them with their frustrations, and came to understand their needs.

My Journey:

  • For the first part of my Internship, I did some code refining for the user contribution stats and successfully made them available on the user profile pages. Along with that, I solved some issues listed on my GSoC proposal.
  • For the second part of my Internship, I started conducting user testing sessions, solved some discovered issues right away and for others I opened issues on GitHub.
  • For the third part of my Internship, I worked on enhancing the usability of the Dashboard on mobile devices. The current navigation bar is overcrowded, so to make it accessible I replaced it with a hamburger menu layout. I also solved problems like the overlapping of course-details columns and the inflexibility of using Dashboard features on small-screen devices.
  • The fourth and final part of my Internship was the most adventurous — attending Wikimania’17, Montreal. I finally met Sage and Jonathan (yaay!!) and spent some memorable time with them. For the first time I got the opportunity to share my work with Wikimedians at the Wikimania Hackathon Showcase (more about the hackathon and Dashboard showcase at 47:45), with the help of Sage Ross. I also had a short meeting with Jonathan to get his help creating a user testing report to properly document my work. He shared relevant links and some of his own work reports to give me a sense of what an ideal report looks like. That was a fun meeting, as Jonathan also suggested some universities in the USA for my plans to pursue a Master's in Human-Computer Interaction.

GSoC Experience:

GSoC’17 with Wikimedia Foundation

Being a Google Summer of Code intern at the Wikimedia Foundation introduces you to many challenges: you have to be upfront and prompt, and complete many tasks, which include creating your user page on mediawiki and keeping it updated with project details, submitting weekly reports, keeping your blog posts updated, attending pre-evaluation meetups, and much more.

I would like to say thank you to the Wikimedia GSoC administrators (Sumit, Srishti Sethi and others) for pushing us to make these efforts, because now that GSoC is just about over, we don’t have to worry about documenting the work and spending extra time on that. It was necessary to do it while the details were fresh in our minds.

Experience with my mentors

My mentors took time out to organize daily meet-ups, helped us with doubts, gave suggestions to improve the quality of our work, guided us on the steps to follow, pair programmed when needed, kept all the interns in touch, and overall kept the work environment fun and interactive.

Thanks Sage Ross and Jonathan Morgan for taking me on as an intern and sculpting a novice developer into a confident and disciplined Wikimedian.

Interning with Wiki Ed Foundation

Working for Wiki Ed Foundation

I got to meet Jami, Lianna, Samantha and Frank, along with Sage (more about them), during Wikimania this time. We went out for supper and I spent an amazing time with them. For that day I felt like a part of the Wiki Education Foundation family, and I rejoiced in those moments. I am really grateful to be a part of the movement, the direction in which the Wiki Ed Foundation works, and the steps it is taking towards its vision.

Next steps for me

I am planning to pursue a Master's in Human-Computer Interaction, for which I have an exam (the GRE) due next month. I also plan to apply for a Wikimedia project grant to continue my work on the Dashboard project. In the long run, I see myself playing a bigger role in this community and the open source movement.

Tasks Completed

  1. Conducting user testing sessions — I had 6 successful user testing sessions over the course of my internship.
  2. Documenting the outcomes from the Testing Sessions.
  3. Solving most of the issues discovered in the user testing sessions and adding features to enhance the usability of the Dashboard.

Future Scope

  • Work on the Suggestions made by users to increase the dashboard usability.
  • Designing Dashboard layout for the Mobile Devices.

Quick Links

Link to Dashboard:
WikiEduDashboard GitHub Repository
My GitHub Repository
GitHub Commits
Issues Reported 1, 2, 3.
Phabricator project task
Project Weekly Reports
Project Blog — 1 ,2 ,3 ,4 , 5, 6, 7, 8, 9, 10, 11, 12, 13.
GSoC’17 Wiki Page
User Testing Documentation (Work in progress)


Sage Ross, Jonathan Morgan.

Sejal Khatri | Stories by Sejal Khatri on Medium | 2017-09-06 03:16:55

The internship is coming to an end tomorrow, 6th September 2017. I have mixed feelings: it was a great experience, and knowing that it is coming to an end makes me happy and sad at the same time. I will no longer officially be an intern at WMF. But the best part of open source is that you can always be a contributor and keep developing, any time of the year.

What was my project?
There is a service named citoid that generates citation data and makes it easy to insert references in the Visual Editor. It uses Zotero translators, and my work was to document what these translators are, how we can create them, and how to get them deployed.

Find my work
I have created two pages, one is the extension of the other.
Main page with all the information about development
Second page explaining how to create a translator for a blog (and similar sites)

Other microtasks that I did
Before you start documenting, you need to know what it is that you will be working on. My first task was to explore what these translators actually are and what power they hold. Once I got an idea about translators, I really enjoyed developing them. I submitted a few patches to the Zotero upstream during the internship period –
The Economic times
The Open library
The Globe and Mail
BBC Newsbeat
Oxford Reference
TV by the numbers
If you wish to have a translator for any site, let me know in the comments; I’ll surely try to write it.


After preparing a few translators and playing with Scaffold before the internship, I started writing documentation from the first day. For the initial draft, the documentation covered how to write a translator using Scaffold. This included the development environment, concepts, a working example and code details. The first major improvement was to use CSS selectors instead of XPaths to identify HTML nodes. Around the midterm I got a new task: to explore how to write a translator on the server side. This was the main motive of the whole project, to find out how we can write code and test it on the server, but somewhere around this time, Zotero 5.0 was released. So the documentation was updated to cover any changes that might come in the procedure of development through Scaffold.

Figuring out how tests can be carried out on the server was the task that took most of the time. I learned about containerization and worked with Docker. A new section came into existence – Developing and testing on the server, with a working example for MediaWiki.

For the last task, I documented how we can write translators for blog-like sites, taking the example of the Wikimedia blog, because it is the most common use case that people will come across, and for someone who wants to skip all the details and quickly learn through an example, it will be a good landing page. This page concentrates only on development on the server, not Scaffold.

Outside the internship, I will be refining the documentation further, hopefully with input from my mentors and the community at WMF. What’s left to do is to translate all this work into Hindi, as I proposed in my application. I will also be looking into other projects at WMF that I can contribute to. It was a great experience. Outreachy Rocks!!!

Looking forward to presenting my work on 21st September, 2017

Note: Applications for the next round of Outreachy are opening soon. Stay informed.

Screenshot from 2017-09-06 04:57:26.png

Sonali Gupta | It's About Writing | 2017-09-05 17:30:22

Working for Wiki Education Foundation.


Sage Ross, Jonathan Morgan.

Quick Links

  • Outreachy Internship Round 13 (December 6, 2016 to March 6, 2017) — Link
  • Project Task: [P&E Dashboard] Provide cumulative statistics for all programs a user has participated in. — Link
  • Outreachy proposal for Statistical User Interface in WikiEduDashboard — Link
  • UX feedback on user dashboard wireframes by Jonathan Morgan — Link
  • Introduction Blog Post by Sage Ross — Link
  • Listed in Wikimedia Outreachy Past Projects — Link

Sejal Khatri | Stories by Sejal Khatri on Medium | 2017-09-05 07:26:25

A Conclusion to My Internship

When I began this internship three months ago, neither I nor my mentor had a clear picture of what I would produce, or how the goals of the project would change as time progressed.

The high-level goal of the internship was clear: to automate web accessibility testing. How best to do this, however, was not.

The first few weeks, as well as the weeks prior, were mostly spent on research.

I first needed to understand the problem domain, and then understand the tools available, as well as how to use them.

I had done some python programming before, but my work on this project was a side of python I had not delved into.

I had never used pytest or selenium, and knew nothing about python packaging (or accessibility, for that matter).

Prior to the start of the internship, I took a Udacity course on web accessibility, and wrote some simple tests using selenium and python.


Most of my work on this project was spent writing the axe-selenium-python package, and figuring out how best to integrate accessibility tests into existing test suites.

Writing a python package was actually the stretch goal of my internship.

Completing an alpha-stage version took a lot less time than I anticipated.

Once I completed that, however, the number of questions and problems to solve quickly ballooned into a project that could span at least a year.

Given that I didn’t have a year (and actually, by that point, had six weeks left), I focused on only two or three goals.

I wanted to produce something that other people could use, and would want to use, and chose the tasks that would best facilitate that.

This came down to three primary goals:

  1. Enable users to easily include and customize accessibility rule tests.
  2. Implement reporting that gives usable feedback on accessibility violations.
  3. Provide sufficient documentation.

The Final Stretch

Unfortunately, I was not able to complete the above tasks as well as I would have liked.

Integrate Accessibility Tests

I wanted to allow users to easily customize and generate individual tests for each accessibility rule, and to allow marking individual tests as expected failures.

This has multiple benefits. Individual tests would allow a user to quickly see what accessibility rules are failing, and provide a concise report on that rule.

Without individual tests, there would be a single test that fails if any accessibility violations are found, with the details provided as a single report.
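To make the contrast concrete, here is a minimal sketch of the per-rule idea. The shape of the results dict mirrors axe-core's JSON output (a "violations" list of dicts with "id", "impact", and "nodes"); the helper functions themselves are hypothetical, not part of axe-selenium-python.

```python
# Hypothetical sketch: one outcome per accessibility rule, instead of a single
# all-or-nothing test. "results" follows axe-core's JSON output shape.

def failing_rules(results):
    """Return the set of rule ids that have at least one violation."""
    return {v["id"] for v in results.get("violations", [])}

def rule_outcomes(results, all_rules, expected_failures=frozenset()):
    """Compute a pass/xfail/fail outcome for each rule individually."""
    failing = failing_rules(results)
    outcomes = {}
    for rule in all_rules:
        if rule not in failing:
            outcomes[rule] = "passed"
        elif rule in expected_failures:
            outcomes[rule] = "xfail"  # known violation; don't fail the suite
        else:
            outcomes[rule] = "failed"
    return outcomes
```

With something like this, a test runner could report each rule separately, and marking a rule as an expected failure works just like pytest's xfail.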

I did, however, enable users to customize the accessibility test by setting the context and options (an included feature of the axe-core API), and also to filter by impact level.

This means that users can check only for critical violations, or for serious violations and higher, and so on for each impact level that axe-core uses to classify violations.
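A sketch of what that impact filtering might look like. The four impact levels are the ones axe-core uses to classify violations; the helper function and its name are my own illustration, not the package's actual API.

```python
# Hypothetical sketch of impact-level filtering: keep only violations at or
# above a chosen severity. Levels are ordered least to most severe.

IMPACT_ORDER = ["minor", "moderate", "serious", "critical"]

def filter_by_impact(violations, minimum):
    """Drop violations whose impact is below the given level."""
    threshold = IMPACT_ORDER.index(minimum)
    return [v for v in violations
            if IMPACT_ORDER.index(v["impact"]) >= threshold]
```

For example, `filter_by_impact(results["violations"], "serious")` would keep only the serious and critical entries.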


While there are many improvements I would like to make to the reporting, I did produce a usable report feature.

It closely resembles the report feature in axe-selenium-java.

I have gold-plating goals for this feature as well. I would like to create a pytest plugin, similar to pytest-html, that creates a single HTML accessibility report for each test suite or job.


I did want to ensure that I didn’t complete this project without documentation, or without helpful documentation.

I know from experience that it’s very frustrating to find software that you want to use, but its documentation doesn’t provide enough information on how to use it.

There is a great deal that could be added, but I believe it is sufficient to get someone started using the package.

Goals for the Future

As I said before, the list of “nice-to-have” features grew very long very quickly over the past several weeks.

This project could easily continue for another year fulfilling all of these goals.

Here’s a basic list of what I envision for the future of this project, listed more or less from highest priority to lowest:

  • Implement Individual Rule Tests
  • Enable Expected Failures (pytest xfail)
  • Write an Accessibility Report Plugin for Pytest
  • Add an Accessibility Stage to the Jenkins Testing Pipeline
  • Enable the Auto-Filing of Bugs When Tests Fail
  • Create a Dashboard of Mozilla’s Web Assets with their Accessibility Rating
  • Increase Community Involvement in Accessibility Projects

I have been told that a lot of people are very excited about the work that I’ve been doing on this project.

It actually fills a need that several Mozilla employees have been wanting to accomplish for some time now.

It means a great deal to me to be doing meaningful work that makes a positive impact on people.

Final Presentation

To conclude my internship, I have decided to do a final presentation on this project.

This will be a high-level overview, and less technical than most of the blog posts I have written.

I will be presenting live to Mozillians over Vidyo, but the presentation will also be recorded, and available to the public on AirMozilla.

Currently, the presentation is scheduled for September 11 at 2:00 PM MST.

Employees of Mozilla will be able to view the presentation live in Matt Brandt’s Vidyo Room.

More details will be provided as they are available.

The post Mozilla Internship: Conclusion and Retrospective appeared first on Kimberly the Geek.

Kimberly Pennington | Kimberly the Geek | 2017-09-04 15:08:50

The Open Innovation Team at Mozilla and Stanford have partnered to explore how a decentralized design process (a design process where people are not in the same physical location) can provide a way to innovate and include more diverse perspectives in the design process.

I participated in Mozilla’s and Stanford’s open design sprint for an accessible web. My team had five people from different parts of the world, and I got the chance to lead it. My experience working with people was pretty amazing. In this blog I am listing all the articles, videos, books and references shared with us during the sprint.

Rakhi Sharma | aka_atbrakhi | 2017-09-04 09:10:10

I’ve been in Italy a few days now, with only travel-specific phone and laptop. Both are set up with a VPN and the laptop drive is encrypted. I’m using web versions of the services I need, with the exception of outgoing email. (My weird mail setup relies on self-hosted SMTP, that essentially forges my From: address.)

I decided not to log out of everything on the phone (my point of entry, Germany, is not known as a hotbed of traveller phone searching.) So there was no need to involve my spouse to relay authentication tokens from home.

I did have some trouble with iCloud two factor auth and had to resort to using a recovery key. Despite the appearance of auth tied to a device and not only a phone number, I couldn’t get the login token with my new SIM.

Once I got everything working it’s been OK; I just have to not flush browser data and lose my auth. Not having email on my phone is a minor nuisance, but I can live with that for two weeks.

I have no passwords saved locally, instead I made an encrypted disk image with passwords and other important things (like a photo of my passport and server ssh keys.) It’s on my web server, so I can download it from anywhere. For now I’m keeping the encrypted image locally and only opening it as needed. I’ll delete it before I get to the US.

The only real nuisance has been trying to minimize data cached on my phone from web browsing. I try to not open random links without a private window, but on a phone particularly it’s sometimes hard to tell what you are clicking on.

I have Twitter set up in Brave (a security-minded mobile browser), Safari configured without Javascript or cookies, and Dolphin for things that need both. This involves a lot of copying links between browsers, but it’s the same thing I do on the desktop.

I have a much better data plan for the phone than last time, so I’m actually doing most things there. I also have a new SIM for my mifi, although it’s unfortunately still locked to Vodafone. (Someone at Vodafone NZ suggested Italy could unlock it, but they won’t.)

Andrea Longo | Feorlen's Other Blog | 2017-09-03 19:18:27


Do you need exact dates? I can give them to you, but I guess it doesn’t matter. We got down from our bus at a place which had flyovers, and took a cab to reach your college. It was nearly 3 AM. I was in the midst of people with whom I felt really uncomfortable, and was trying to calm myself down by saying that it’s fine, it’s part of the experiences that I want to have. We entered through the college gate and I was lost in the college; it was alive even at that time of the night. There were lights and decorations and stuff, things that you see in videos of college fests, and I was finally there, coming out of my monotonous life, and out of the shell that wasn’t making me happy anymore. “He’s there”, Garv told the driver, signaling him to slow down the car and pick you up. “Who is he?” I asked Sana. “A friend of Garv”, she told me. “One more person to make me feel a misfit. What is wrong with these people of wonderland? Why do all guys look cleaner and brighter than I would ever be?” That was my first thought about you.

Little did I know that it was the beginning of two beautiful relationships with two beautiful souls: Urvi and you. We had a great time then, and dear friend, thanks for making all the efforts to make all of your guests feel at home. I don’t really remember what our first talk was like, or when we started talking, but for sure, it was smooth. A group of people who flip like binaries. I was not there to make friends, nope. I was just trying to run from humans and be closer to other aspects of life that I had stopped understanding. The good thing was, I brought a blank state of mind, without expectations, without rules and standards to define joy, and you, luckily, filled that blankness with positivity. I will praise you for being confident about yourself, for your choice to get out of your comfort zone, effortlessly as it seemed, though it might not have been in practice. I will praise you for handling a large group and making people feel together, though you were the only one who could have felt the most disconnected for being from some other place; there were no insecurities, and that is commendable.

We have changed a lot; it is almost a year now. Nevertheless, this letter is for the guy I met on that day. I love giving free advice, and here is some for that guy who is so eager to make new friends and live the time of his life.

Pages are fluttering desperate to be turned,

Get lost on the road, And sleep on the floor,

Be sure about change, Be ready to break the chains,

Don’t hold your smile and dance a little wild,

There are streets and avenues waiting for your footsteps,

droplets, crystals of sand, shades of green,

Places void and packed and so many things you don’t know yet,

Give chance to life to surprise you, it has waited long to unfold what it holds.

Letters shouldn’t start with a hey and end with a bye

Sonali Gupta | It's About Writing | 2017-09-03 07:33:10

Well, here goes the official summary of the project I did with the Wikimedia foundation for my Outreachy internship, Round 14.

Project: Allow Programs & Events Dashboard to make automatic edits on connected wikis
WikiEduDashboard is a Ruby on Rails/React.js based web app that supports the organization and management of assignments, courses and events for groups of Wikipedia users who are working to contribute to a common Wikipedia project. The dashboard code runs two main projects: the Wiki Education Foundation Dashboard (for programs focused on higher education in the US and Canada) and the Wikimedia Programs & Events Dashboard (for the global Wikimedia community).
While the Wiki Education Foundation Dashboard already supported OAuth-based automatic edits to the English Wikipedia, mirroring course progress and changes in the respective wiki pages, the aim of my project was to introduce and establish these wiki editing features in the Programs & Events Dashboard.

Milestones achieved: I started with discussing and understanding the proper scope of the project with my mentors. It was followed by around two weeks of intensive code investigation, discussions and research.
I started making my code changes the following week. This was followed by several code review cycles, interspersed with local testing at my end. We merged the changes soon after that. We zeroed in on the Portuguese Wikipedia project to test our functionality and got in touch with a volunteer to help us during the testing period, create Portuguese-language templates and conduct discussions with members of the community.
We have successfully deployed the project and it is now live. The Portuguese Wikipedia project was the first to come on board and some editathons have already been conducted successfully.
I also created the documentation members need to enable automatic edits for their project, and the process of enabling it for Czech Wikipedia has now begun. I hope more and more projects make good use of the feature.

Next steps: Having successfully delivered my project as per the decided problem statement, I am looking at other aspects of the project now. I have learnt a lot on this project and wish to keep contributing to it. I am now looking at technologies like React, which I am less comfortable working with than Rails. I also wish to keep providing support for any help that is required of me with regard to the automatic edits feature.

It has been a great opportunity and privilege to be working with mentors like Sage and Jonathan, and I wish the learning continues :)

Medha Bansal | Stories by Medha Bansal on Medium | 2017-09-02 18:53:05

Hi folks!

I returned from Flock 2017, which was in Cape Cod, MA this year. This was my first Flock, largely as a result of my Outreachy project. I will be claiming my Outreachy travel fund for it, and likely would not have gone were it not for that fund. I’m still job hunting, and all!

There is no substitute for in-person interaction!

Fedora folks are very friendly. :) OK, at least the ones at Flock were! Being there reminded me of the aspect of my Regional Hubs design wherein a major goal was to encourage in-person interactions because they’d increase the chances of people remaining engaged with the community. I’d say Flock definitely confirms this goal, given the number of ways in which people — including myself — were becoming more involved simply through chatting with others in the community.

I’d say one of the major effects of my time at Flock was making the people I’ve spoken to only through IRC and email more real. I do find in-person interaction so much better! Of course, I also really like people and being around people, and the lack of a job means I am not really getting a lot of that (I’m one of those weird introvert/extrovert combination people).

I met a lot of people, although being partly faceblind and poor at remembering names made (and makes) that more complex. Hopefully people remember me, even if I can’t remember who they are!

Presenting my work

My best approximation of a UX design process.

I gave a presentation on my UX work for Regional Fedora Hubs, with the goal of helping others be able to do basic UX work on their own development projects. I did most of the talking, and Mo was helpful for moral support and answering Hubs questions I didn’t know the answer to. The presentation and handout are available on my website, and the video will also be available there once I have access to it. I’m not certain how likely others are to use that information, however. I did find that people asked relevant questions at the end, suggesting that not only did they follow my talk, but they were interested enough in it to be curious about things I hadn’t mentioned.

I appreciated the chance to present to a new audience, and to gain additional practice in presenting and preparation thereof.

What did I learn? What did I do for the first time?

I also appreciated the opportunity to attend various sessions on UX, both to see how other people do certain tasks and to add additional tools to my toolbox (microtesting, “SUS” or System Usability Scale, and focus group-based usability testing all come to mind).

I enjoyed learning about various aspects of Fedora that I hadn’t yet encountered, whether specific technical concepts like modularity and Project Atomic, or being able to attend and contribute to the diversity team’s session. I also liked being able to get caught up on what’s been going on with Fedora Hubs since Outreachy. I’ve been out of the loop there because three months of jury duty and various temp jobs made it difficult to attend the weekly meeting.

I created my first badge, albeit through a very simple change to an existing one. Additional experience with Inkscape with others around to guide me is always appreciated, given that I’d like more experience with basic visual design!

Now what?

I am not sure yet how I might best continue to offer my skills to the Fedora and open source community, but this conference has definitely confirmed my interest in doing so. Thank you Flock, Outreachy, and the organizers and participants thereof!

We’ll see if I can figure out some way to make the next Flock. :)

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-09-02 18:30:03

A lot happened after my final changes were merged. First of all there were some ad-hoc changes that we had to make after the deployment for the bugs/errors caught on Sentry.

We got in touch with Portuguese Wikipedia volunteer Joao, who was kind enough to create the Portuguese templates for our use. We activated the project for Portuguese Wikipedia and he assured us he would soon be conducting editathons to be able to test the project live.

I also focused on working on a few React issues. This time was also spent preparing documentation for both non-tech and tech volunteers who wish to get the editing feature enabled for their project. It was all in all an energetic time, with me hunting for a job alongside. I hope to find a suitable job soon, as my time with the internship comes to an end.
It feels really nice to see my changes live in production, and I am anticipating the onboarding of new languages and projects soon!

Medha Bansal | Stories by Medha Bansal on Medium | 2017-09-02 14:44:49

Completed the initial coding draft and resolved pending code review changes.
I faced a lot of problems with the RSpec tests and had a discussion with Sage about them.
We fixed the tests in another pair programming session, which is always fun.
We decided that this is an area I need to work on more, and I decided to write a few more tests for other files. The interesting thing about tests in this project is that importance is given to functional coverage rather than line coverage.
Sage believes that tests must exercise the functionality being changed or added rather than cover every single line of code written, as the latter is a waste of time and resources, especially in a project like the Dashboard, where they are already short-handed.
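That functional-coverage philosophy can be illustrated with a tiny, hypothetical helper (`campaign_slug` is my own illustrative name, not actual Dashboard code): the spec pins down the behavior callers rely on, through the public interface, rather than trying to step through every internal line.

```ruby
# Hypothetical helper, loosely in the style of a Rails model method;
# not actual Dashboard code.
def campaign_slug(title)
  title.strip.downcase.tr(' ', '_')
end

# Functional-coverage style: assert the behavior users depend on,
# not line-by-line internals.
raise 'expected strip + underscore' unless campaign_slug('  Spring 2017 ') == 'spring_2017'
raise 'expected punctuation kept'  unless campaign_slug('Art+Feminism') == 'art+feminism'
```

If the helper is later rewritten internally, these assertions still describe what must not break, which is exactly the coverage that matters.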

While working on tests, I came across a nice resource specific to Rails tests, which we discussed in one of our daily check-ins. But I think the lessons hold true for any MVC framework. Do check it out for yourself:

For my work with tests, Sage also shared a resource about page objects in tests. It was fun trying to implement these in the Campaign-related tests:
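The page-object idea can be sketched in plain Ruby (the class and method names here are hypothetical; in real feature specs the object would wrap Capybara's page rather than a hash): selectors and page structure live in one class, so the specs themselves read as user-level questions.

```ruby
# A minimal page-object sketch. CampaignPage and its methods are
# illustrative names; a real one would delegate to Capybara's page.
class CampaignPage
  def initialize(data)
    @data = data
  end

  # Intent-revealing queries instead of raw selectors scattered
  # through specs; if the markup changes, only this class changes.
  def title
    @data[:title]
  end

  def program_names
    @data[:programs].map { |p| p[:name] }
  end

  def has_program?(name)
    program_names.include?(name)
  end
end

page = CampaignPage.new(title: 'Spring 2017',
                        programs: [{ name: 'Editing 101' }])
page.has_program?('Editing 101')
```

The design win is that a test can say `page.has_program?('Editing 101')` instead of repeating a CSS selector in every spec file.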

Overall, I felt myself grow a lot with regard to writing tests over these weeks.

Medha Bansal | Stories by Medha Bansal on Medium | 2017-09-02 14:23:03

I started making code changes in this duration, and my Sidekiq PR got merged.
Fruitful weeks!
To my surprise, even after a very detailed investigation and several discussions with mentors, I learned much about my project’s codebase once I started making the changes.
I had to rethink my approach to making the edits after I discovered a few things that I had understood differently. I’ll work on the new approach now!

Medha Bansal | Stories by Medha Bansal on Medium | 2017-09-02 11:27:02

The internship wraps up today, and I decided to finish by livening up the AppDB home page with a screenshot carousel.

Normally I'm not terribly interested in purely cosmetic changes; all the previous changes to the layout that I've made have been to improve usability. But the old home page was afflicted with a sad little JPEG surrounded by too much whitespace in the upper right corner. It was just crying out for something better, and the AppDB has an abundance of screenshots.

Bootstrap made setting up the carousel fairly easy, though there were some minor glitches before I got the surrounding text to behave itself. The main problem then was image sizing.

The screenshots in the AppDB come in many different sizes and aspect ratios. The carousel would display the full width of each screenshot, but size itself up or down as needed to keep the proportions. The result was a very unpleasant jumping up and down of the text below the carousel as it resized.

Limiting the height of the carousel and setting overflow to hidden stopped the jumping, but it meant that taller screenshots were all arbitrarily cropped at the bottom, and the results weren't usually pleasing.

Ultimately I decided to select an assortment of screenshots from the AppDB and manually edit them to uniform proportions for the carousel. The results went live a few minutes ago, as I write this.

This will be my last blog post for a while; there will be one more, to report on WineConf 2017, which I'll be attending at the end of October. This isn't, however, the end of my work on the AppDB. I have a list of things I still want to do.

Rosanne DiMesio | Notes from an Internship | 2017-08-31 13:41:51

Libkvkontakte version 5.0.0 has been released today and can be downloaded from

Signed by Scarlett Clark (Lappy2.0 Debian Packaging):
7C35 920F 1CE2 899E 8EA9 AAD0 2E7C 0367 B9BF A089

The release enables distribution packagers to enable the new features in the latest Digikam release.

Scarlett Clark | Home of Scarlett Gately Clark | 2017-08-31 10:31:06

The Valora Mizuki Park project: buy villas and townhouses at Valora Mizuki Park in Binh Chanh

Of the overall 37-hectare development, Valora Mizuki Park accounts for a quarter of Nam Long's project. Valora Mizuki Park has more than 360 villas and more than 180 upscale street-front townhouses, located right on the arterial Nguyen Van Linh boulevard, which concentrates all the advantages of Binh Chanh district. Nam Long chose this project in the hope of bringing freshness and creativity to the construction of high-end housing with many…


Julia Lima | Thu thm t uy tn | 2017-08-31 01:56:06

The values that Valora Island brings to your life

The Valora Island villas were developed by Nam Long after three large projects: Valora Mizuki Park, Ehome Mizuki Park and Flora Mizuki Park. Although launched only recently, Valora Island has earned customers' full attention for its 44 unique villas, because these villas bring many of the most practical values to people's lives and meet the 3P criteria that Nam Long set out; moreover…


Julia Lima | Thu thm t uy tn | 2017-08-31 01:48:43