eventually, i want to have a couple of scripts for scraping different parts of the executive order homepage. for now, just these two.

this one gets titles and hrefs (to be appended to a base url in a separate program)

# this program goes to the white house's main executive order homepage
# there, it gets the executive order title and href param

# it takes a page number as an argument in the terminal 
# as of 3/23/17, possible arguments are '0' or '1'

import requests, bs4
import sys

whichPage = sys.argv[1]

eoBaseUrl = 'https://www.whitehouse.gov/briefing-room/presidential-actions/executive-orders?term_node_tid_depth=51&page=' + whichPage
eoRes = requests.get(eoBaseUrl)
soupObj = bs4.BeautifulSoup(eoRes.text, "html.parser")
# get all the executive order links and titles
# returns a list like this: [<a href>title</a>, <a href>title</a>, ...]
eoLinks = soupObj.select('h3 > a')
# for each link in the list,
for link in eoLinks:
  # get just the href
  hrefOnly = link.get('href')
  # get just the title
  titleOnly = link.getText()
  print(titleOnly)

python eoTitleScrape.py 0 > out.txt

output from running the script twice:

Presidential Executive Order on a Comprehensive Plan for Reorganizing the Executive Branch
Executive Order Protecting The Nation From Foreign Terrorist Entry Into The United States
Presidential Executive Order on The White House Initiative to Promote Excellence and Innovation at Historically Black Colleges and Universities
Presidential Executive Order on Restoring the Rule of Law, Federalism, and Economic Growth by Reviewing the “Waters of the United States” Rule
Presidential Executive Order on Enforcing the Regulatory Reform Agenda
Providing an Order of Succession Within the Department of Justice
Presidential Executive Order on Enforcing Federal Law with Respect to Transnational Criminal Organizations and Preventing International Trafficking
Presidential Executive Order on Preventing Violence Against Federal, State, Tribal, and Local Law Enforcement Officers
Presidential Executive Order on a Task Force on Crime Reduction and Public Safety
Presidential Executive Order on Core Principles for Regulating the United States Financial System

Presidential Executive Order on Reducing Regulation and Controlling Regulatory Costs
Executive Order: Border Security and Immigration Enforcement Improvements
Executive Order: Enhancing Public Safety in the Interior of the United States
Executive Order Expediting Environmental Reviews and Approvals For High Priority Infrastructure Projects
Executive Order Minimizing the Economic Burden of the Patient Protection and Affordable Care Act Pending Repeal

this one gets lines of executive orders:

import requests, bs4
import sys

# manually getting this list from eoTitleScrape.py until i figure out how to hook these up
hrefParams = [u'/the-press-office/2017/01/30/presidential-executive-order-reducing-regulation-and-controlling', u'/the-press-office/2017/01/28/executive-order-ethics-commitments-executive-branch-appointees', u'/the-press-office/2017/01/27/executive-order-protecting-nation-foreign-terrorist-entry-united-states', u'/the-press-office/2017/01/25/executive-order-border-security-and-immigration-enforcement-improvements', u'/the-press-office/2017/01/25/presidential-executive-order-enhancing-public-safety-interior-united', u'/the-press-office/2017/01/24/executive-order-expediting-environmental-reviews-and-approvals-high', u'/the-press-office/2017/01/2/executive-order-minimizing-economic-burden-patient-protection-and', u'/the-press-office/2017/03/13/presidential-executive-order-comprehensive-plan-reorganizing-executive', u'/the-press-office/2017/03/06/executive-order-protecting-nation-foreign-terrorist-entry-united-states', u'/the-press-office/2017/02/28/presidential-executive-order-white-house-initiative-promote-excellence', u'/the-press-office/2017/02/28/presidential-executive-order-restoring-rule-law-federalism-and-economic', u'/the-press-office/2017/02/24/presidential-executive-order-enforcing-regulatory-reform-agenda', u'/the-press-office/2017/02/10/providing-order-succession-within-department-justice', u'/the-press-office/2017/02/09/presidential-executive-order-enforcing-federal-law-respect-transnational', u'/the-press-office/2017/02/09/presidential-executive-order-preventing-violence-against-federal-state', u'/the-press-office/2017/02/09/presidential-executive-order-task-force-crime-reduction-and-public', u'/the-press-office/2017/02/03/presidential-executive-order-core-principles-regulating-united-states']

# maybe useful later
# allLines = []

baseUrl = 'https://www.whitehouse.gov'

for param in hrefParams:
  # get the page contents
  res = requests.get(baseUrl + param)
  # parse the contents into a bs4 object
  soupObj = bs4.BeautifulSoup(res.text, "html.parser")
  # from the bs4 object, get the class with the title
  title = soupObj.select('.pane-node-title')
  # get just the title text and strip white space
  titleOnly = title[0].getText().strip()
  # from the bs4 object, get the class with the content
  body = soupObj.select('.pane-node-field-forall-body')
  # get just the body text, strip white space, and split on newlines
  lines = body[0].getText().strip().replace(u'\xa0', ' ').split('\n')
  # allLines.append(lines)
  print(lines)
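until the two scripts are actually hooked up, one low-tech option is to have the first script dump its hrefs to a JSON file that the second script loads in place of the hard-coded hrefParams list. a sketch (the hrefs.json filename and the sample href are my own placeholders, not from the scripts above):

```python
import json

# stand-in for the hrefs collected by the h3 > a selector in the first script
hrefs = ['/the-press-office/2017/01/30/example-order']

# at the end of eoTitleScrape.py: write the hrefs out
with open('hrefs.json', 'w') as f:
    json.dump(hrefs, f)

# at the top of the second script: read them back in,
# replacing the hard-coded hrefParams list
with open('hrefs.json') as f:
    hrefParams = json.load(f)

print(hrefParams)
```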

now i wanna do some pattern analysis.
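for a first pass at pattern analysis, the commented-out allLines accumulator from the second script could feed a word count. a sketch, with made-up sample lines standing in for real scraped text:

```python
from collections import Counter

# allLines as the second script would accumulate it:
# one list of lines per executive order (sample data here)
allLines = [
    ['By the authority vested in me as President by the Constitution',
     'Section 1.  Purpose.'],
    ['By the authority vested in me as President by the Constitution',
     'Sec. 2.  Policy.'],
]

wordCounts = Counter()
for lines in allLines:
    for line in lines:
        wordCounts.update(line.lower().split())

# the most common words across every order
print(wordCounts.most_common(5))
```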

Jen Kagan | ITPPIT | 2017-03-24 02:38:54

I knew that we would arrive here at some point. While 3 months ago I was worried whether I had what it takes to deliver good work during my upcoming Outreachy internship, the empty feeling of how my daily routine would look without all this has started to overwhelm me. What if this was a one-off thing? Maybe I was just lucky to have this and might not have the needed ingredients to continue such work in other environments. While I don’t think that any sign of impostor syndrome has caught up with me, doubts start to arise frequently when you have to think about “what do I do next?”.

So here I am, 3 months later, but the amount of experience and skills I gained during these 3 months would sum up to a much longer timeframe had I not been part of Outreachy. And I am really grateful for this. Sometimes a small opportunity to get stuff done is all it takes to have some impact on your life. While I do think that the world doesn’t revolve around Outreachy, the program has a special place in my heart, as it was encouraging to work with like-minded people, where everyone is more or less on the same page.

Admittedly, my work on Diversity & Inclusion at Mozilla is rather unique compared to the mostly technical positions of most other Outreachy projects. I do however feel that this was a refreshing change, one which offers more inclusivity for non-programmers contributing to open source projects. I hope to see more of that in the upcoming editions, especially from Mozilla, which reflects a diverse culture of contribution opportunities. It would be great to see such a culture adopted by more open source projects in the future. Judging from my own experience, there are a lot of non-technical people already contributing to open source whose value is often no less than a programmer’s. I’m looking forward to seeing projects come closer to this mindset.

I hope to see Outreachy put more effort into the transition phase after the internship, though. Mentoring interns on what they can do in their upcoming endeavours would have a great impact on them. In my case, I already had a few years of background in the Mozilla communities. Someone else might not be that privileged, however. This is where mentoring would make a rather big difference.

I want to thank everyone who has helped me during the past months, whether it was in a focus group or just a few nice reassuring words, which are always great to hear on an adventure like this. I hope you will stick around, as I definitely will.

Kristi Progri | Kristi Progri | 2017-03-23 16:55:45

I am just wondering when the Grace Hopper Conference (GHC), one of the largest conferences for women technologists, will start paying its speakers. I have organized both small and large conferences and have always prioritized creating an inclusive space. Creating an inclusive conference goes beyond what is built for the participants; it also includes the speakers and facilitators.

I understand when small conferences are unable to pay speakers and facilitators (guess what, though: AlterConf has done a wonderful job of paying no matter what). When it comes to large organizations, I have absolutely no patience for organizers that expect free labour from speakers or any facilitator. The worst part about GHC is that it’s a space for women-identified technologists. How can we centre and celebrate women if we are not paying them for their labour? It completely counteracts the narrative of celebrating women for their achievements if we expect them to work for free and pay for their own travel and accommodations. Sadly, speaking in front of thousands for free does not pay the bills and never will.

I asked GHC via Twitter what’s going on, and this was their response to the rising sponsorship rate (and not paying speakers):

I thought this response was funny. I get it, they are defensive about people saying anything, but this is the part that got me: “$ generated by GHC is used for programs that help women technologists all year”. So what exactly happens during the couple of days in the year when GHC occurs?

Based on the goals of GHC, I am the “targeted audience”: a young black woman working in technology. But I have personally never attended GHC. I am critical of the spaces I enter, and I believe it is my role as a participant to hold an organization accountable for its actions. I have heard that GHC doesn’t pay its speakers, and I have also heard about the difficulties students have attending. Given those facts, I haven’t bothered to attend and don’t plan on doing so in the future.

I do believe that GHC provides an opportunity for young women to have access to recruiters and to meet amazing women doing great work in the field. I think it is important to be critical and to recognize the individuals who aren’t able to attend or speak. GHC is the largest gathering where “women technologists and the best minds in computing convene to highlight the contributions of women to computing”, according to their website. As the leader in the field, it is integral for GHC to pay its speakers. Paying speakers compensates them for their labour, recognizing that labour is not free and that it matters.

Well, the wage gap is pretty intense even within technology… so why wouldn’t we support paying women? Women in the tech field made 28.3% less as computer programmers in 2016. I wish it were rocket science, but sadly it isn’t: GHC gains sponsorship from all the big organizations and should prioritize inclusive practices. The diamond rate for sponsorship is over $100,000, which could pay several speakers.

The trend of not paying female speakers shows how far we still have to go with regard to Diversity and Inclusion within technology. This problem stems from a greater issue: the trend of focusing on white cis women, who often have the financial capacity or resources to work for free. I am not saying all white women have been afforded that opportunity, but when we don’t pay speakers, we EXPECT them to have the financial resources to pay their way. I often think about Nicole Sanchez’s article “Which Women in Tech?” Sanchez shares that we often centre a particular narrative when it comes to women in tech. GHC can do better, and we should expect them to do better as “leaders” in the field.

Next steps:

If you work for an organization that sponsors GHC, please ask them to look into it, or request that speakers be paid. Here is a list of companies that are currently sponsoring.

If you are attending GHC this year send a note to the organizers asking about speakers.

So GHC when are you going to start paying speakers?

Nasma Ahmed | Stories by Nasma Ahmed on Medium | 2017-03-21 18:52:50

Hey all,

My Outreachy internship primarily involved tasks related to improving the io-stats translator of Gluster.

So what is a translator?

I shared a video in my last post, "Hacking Gluster FS", in which they explain it clearly. But to put it briefly: translators convert requests from users into requests for storage. They are layered in such a way that each request passes through them, and a translator can also modify requests on the way through.
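Gluster's translators are written in C, but the layering idea can be sketched in a few lines of Python: each translator holds a reference to the next one down, and a request passes through the stack, getting observed or modified along the way. This is a conceptual sketch only, not Gluster's real API:

```python
class Translator:
    def __init__(self, next_xl=None):
        self.next_xl = next_xl

    def handle(self, request):
        # pass the (possibly modified) request down the stack
        return self.next_xl.handle(request) if self.next_xl else request


class IoStats(Translator):
    """Toy analogue of the io-stats translator: counts FOPs passing through."""

    def __init__(self, next_xl):
        super().__init__(next_xl)
        self.fop_counts = {}

    def handle(self, request):
        # instrument: count each file operation as it passes through
        fop = request['fop']
        self.fop_counts[fop] = self.fop_counts.get(fop, 0) + 1
        return super().handle(request)


storage = Translator()    # bottom of the stack, standing in for storage
stack = IoStats(storage)  # io-stats layered on top

stack.handle({'fop': 'write'})
stack.handle({'fop': 'write'})
stack.handle({'fop': 'read'})
print(stack.fop_counts)   # {'write': 2, 'read': 1}
```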

The io-stats translator, which my tasks related to, is used to instrument Gluster from within. The volume profile command provides an interface to get the per-brick I/O information for each File Operation (FOP) of a volume. The per-brick information helps in identifying bottlenecks in the storage system.

To briefly describe my tasks:

1) As stated above, the profile command provides an interface to get the per-brick I/O information for each file operation of a volume, so it is necessary that all file operations are listened for. To achieve this, my first task involved finding the missing file operations and adding them to the io-stats translator.

The commit related to it can be found here.

2) While doing the above, my mentor suggested that using a code-generation method would help reduce the amount of code in Gluster, as many lines of code for each file operation looked similar and could be generated from a common template.

An initial step towards achieving this is to have the details of every file operation together in one place. In this attempt, we found that a file operation named "compound" was missing from the list. We attempted to add it. The related commit can be found here.

3) With support for all the file operations added, our next step was to make the code generation framework work. The related work can be found here. We had a few issues with it, which can be found in the comments of the commit.

With this completed, my next tasks involve making the Gluster profile command more useful. I shall share my learning and work related to it in my next blog post.

Menaka M | Hey, I'm Menaka | 2017-03-19 19:58:58

My Outreachy internship ended last week, March 7 (6 in the US). Three months have gone by fast; part of me still can’t believe I finally upstreamed my first driver.

Since the last update, I worked on supporting both I2C and SPI by factoring out common code and using the regmap API to handle the relevant calls of the I2C/SPI subsystem. I also got the opportunity to learn about ACPI and the Device Tree, which are necessary for enabling enumeration of the sensor, especially when using the SPI protocol1. In the last few weeks I worked on writing support for a triggered buffer, where I got to explore interrupt handling and the IIO trigger and buffer. I submitted the patchset this week and will be working on revisions from here on.

I’m happy and content with the work I’ve done. I learned a lot from this experience, and I’m very grateful for this opportunity to contribute to the Linux kernel. I can’t imagine what it would be like if I hadn’t taken “20 seconds of courage” to join the outreachy-kernel mailing list.

The internship journey was fun and nerve-wracking at the same time. Things were not always smooth sailing:

  • Wiring mishap leading to -EPROTO error.
  • Internet connectivity issues (it has always been stable, why did it have to act up during the internship???). This led to moving to another ISP.
  • Got sick on one of the weekends. It was stomach pain that made me uncomfortable for a few days. The doctor’s hunch was that it could be hyperacidity. Took the prescription and got better. I had to avoid coffee and spicy food during this time, which was sad.
  • Some stuff I couldn’t get to work. These were the instances where Daniel (one of my mentors) would connect to my computer to help troubleshoot things.

These incidents diversified the experience. On the upside, working on the patches led to exploring other parts of the kernel. I didn’t expect I’d get a chance to submit patches about the device tree or regmap. I don’t think they will be the last, though, since I plan on continuing the project in my spare time. So yes, there will be more adxl345-related posts in the future.

I would like to thank the organizations that made this program possible, the Linux kernel community for accepting me as an intern, my mentors Daniel and Alison for their guidance, and lastly Jonathan (the IIO maintainer) and the other people who commented on my patches for their insightful code reviews.

  1. I2C can be instantiated from user-space without having to rely on ACPI or Device Tree. 

Eva Rachel A. Retuya | Rachel's Blinking LEDs | 2017-03-17 00:00:00

Hey there! Sorry for the late response, feel free to email me: hi at shubheksha dot com

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-03-16 17:49:47

Yes, definitely! Feel free to email me: hi at shubheksha dot com

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-03-16 17:49:05

Huh, fascinating. Didn’t occur to me that rectangles had the option to be links. Thanks!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-03-16 15:02:03

I’ve been working on making a portfolio of what I did for the Fedora Regional Hubs project. Did you know I did a _lot_ of stuff?

I mean, I was definitely busy getting things done throughout. I knew this. Summarizing what I did in a way that someone else can follow is surprisingly complicated. There’s a lot of information scattered around my email, Pagure tickets, throughout this blog, and on my laptop.

I’m really glad that I was blogging the whole time, because it makes it a lot easier to reconstruct what I did. But boy. This is very time-consuming!

I am glad that I decided to try starting with a presentation outline: while the presentation isn’t done, it’s started, and it helped give me focus for the summarizing I have been doing.

I’m also glad that I am able to ask Alex Feinman for feedback, as it has been very helpful to be able to talk to him about it. And that I could ask a million questions of Mo both during and after the internship.

Even just the high level outline I wrote last night in a fit of comprehension looks like a lot:

Preliminary research

  • Competitors
  • Fedora Hubs
  • Contextual interviews


  • Processing raw data (transcribe, summarize, top 5 pains, problems, workflows)
  • Brainstorming with others (whiteboards, notes on brainstorming session, sketches to start with)
  • Questions
  • Affinity mapping
  • Prioritization
  • Deep dive/brainstorm on top questions (don’t forget survey!)


  • General sketches & wireframes & tickets (invitation page, login, etc) — after general brainstorming session
  • Specific sketches & initial wireframes & tickets (search/filter pages for people, events, Hubs. Notifications/widgets)
  • Find holes, enter tickets.
  • Additional discussion as needed/follow-ups in weekly meetings/blog posts

Feedback and iteration

  • Feedback from Mo on wireframes, discuss, adjust
  • Feedback from sayan on feasibility
  • Locations!
  • From participants?

Usability Sessions

  • Prepare for usability sessions.
  • Prototypes need connections among themselves.
  • Identify tasks
  • Prioritize tasks
  • Identify and contact participants (who, why?)
  • Usability script
  • Usability sessions

Analyze feedback

  • Transcription, summary, highlight important
  • Spreadsheet!
  • Discuss with Mo

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-03-15 20:50:49

Maybe you would quit your job to become an artist? A musician? A photographer? A doctor? A teacher? Maybe you would travel the world with just a backpack and some dollars. I want to share my story: deciding to put my fears aside and making one bold move changed my life forever. I left my career as a management consultant to become a developer.

I made the decision on a Sunday in April 2013, after spending the whole day reading about development bootcamps. By then I knew I loved coding, as I had been building tools for my team using Visual Basic and was really enjoying taking online coding lessons. I had also recently built upgrade.pe It wasn’t an easy decision though. For months, I’d been asking myself puzzling questions: Should I leave my job? It took a lot of time and money to get my degree and subsequently this job, which is a good job. Is this too crazy? Should I keep coding as a hobby? Is it too late? How am I going to afford this?

One of the things that kept me from making the decision was a fear of making (another) mistake. See, back in 2011, as I was trying to find my true calling, I had quit a great job in order to take a job as an e-commerce analyst for a hotel chain in my country. I thought the job would take me closer to technology, which is the field where I knew I belonged. Unfortunately, the job ended up being a total fiasco, and the risk I took by choosing it did not pay off at all. So as you may imagine, by April 2013 I wasn’t as confident in following my gut as before.

After much deliberation I finally decided to apply for a bootcamp in Canada, HackerYou. They had a 9-week Ruby on Rails bootcamp hosted by none other than Shopify, and they were offering scholarships. Being awarded the scholarship was crucial to deciding to quit my job and move to Canada to attend classes, as I wouldn’t have been able to pay for the course otherwise.

The nine weeks I spent in Ottawa completing the bootcamp are without a doubt among the best weeks of my life, and that is saying something considering I almost got pneumonia in my first week there. I love Canada; it really is a place where you breathe safety and respect. I learned a lot about Rails, but most importantly I met amazing people who inspired me with their desire to get the most out of the program. Each classmate had a different story, but we all shared one thing: we really wanted to be there. We had decided we wanted to spend nine weeks of our lives devoted to learning.

Learning Rails in Ottawa was just the beginning of this crazy adventure. After that I founded a startup, which failed but taught me so much. You can read about the next part of this adventure here: When failure is good (coming soon).

Andrea Del Rio | Stories by Andrea Del Río on Medium | 2017-03-15 08:02:54

I appreciate this! At the moment, I’m still not sure if it’s possible to have an entire line in a table be clickable. I did figure out how to tell people to turn on ‘clickable’, although sadly after my last usability session. Next time, clearly.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-03-14 18:09:37

Rails’ default logger is quite simple, but it becomes quite difficult to debug in production as the app grows big. Imagine you have 5 different app servers and you have to grep all the log files. Some simple shell scripting could ease the pain, but the log files are still quite messy, and they still won’t help you analyse which URL returned a 500, the time taken by the API, etc. It would be great to have all logs in a centralised location, easing the developers’ pain. Yes! There is a gem for that! 🙂

Log4r is a powerful and flexible Ruby gem used for logging, inspired by the popular Java library log4j. It supports multiple output destinations per log level, custom log levels, etc. Check out the library.

You might also want to check out Graylog. Graylog is an open source log management solution that can be used to monitor your logs. It is built on top of Java, MongoDB, and Elasticsearch. To know more about it, click here. Graylog also comes with a web interface; check out the link to see how to set up the Graylog web interface using nginx as a reverse proxy.

I used the gem ‘log4r-gelf‘ along with log4r to send my logs to Graylog.

1. Add ‘log4r‘ and ‘log4r-gelf‘ to your Gemfile

gem 'log4r'
gem 'log4r-gelf'

and do

bundle install 

2. Now create a YAML file, log4r.yml:

log4r_config:

  loggers:
    - name: production
      level: INFO
      trace: false
      outputters:
        - production
        - gelf

    - name: development
      level: DEBUG
      trace: true
      outputters:
        - development

  outputters:
    - type: DateFileOutputter
      name: production
      filename: production.log
      dirname: "log"
      formatter:
        date_pattern: '%H:%M:%S'
        pattern: '%d %l: %m '
        type: PatternFormatter

    - type: GelfOutputter
      name: gelf
      gelf_server: "example.graylog.in"
      gelf_port: "12219"
      level: INFO

3. Next, in application.rb:

require 'log4r'
require 'log4r/yamlconfigurator'
require 'log4r/outputter/datefileoutputter'
include Log4r

module ChillrApi
  class Application < Rails::Application
    log4r_config = YAML.load_file(File.join(Rails.root, 'config', 'log4r.yml'))
    YamlConfigurator.decode_yaml(log4r_config["log4r_config"])
    config.logger = Log4r::Logger[Rails.env]
  end
end

Tessy Joseph John | tessyjohn | 2017-03-13 19:16:17

Last Monday (March 6) was the last day of my Outreachy internship. And I wouldn’t be exaggerating at all if I said I have learnt more in the past 6 months (including the application period) than all the knowledge combined from my four years of engineering. Sad but true.

During the internship I worked on the design of the firewall UI for Cockpit. I started with Pencil for making the mockups and moved on to Inkscape. Although I worked on implementing the UI simultaneously, it didn’t turn out to be as good as I expected it to be. We also did a lot of usability testing on remote as well as local users, and this was one of the high points of this internship.

  • UX design is all about details and consistency!

Initially, while using Pencil, I would miss out on a lot of important details since there wasn’t room for many modifications. But once I moved to Inkscape, it was overwhelming to keep in sync with all the details and yet remember the larger picture. It is easier to start with the larger elements, such as frames and placeholders, and then move on to the smaller details like text and font. Also, fixing small details like width, height, and alignment at the end can be a pain. It is less time-consuming to keep making the changes along the way, since that also maintains consistency and there is less chance of missing other elements that have the same specifications.

  • Usability feedback is the key!  

I wrote about this in my previous post and I am saying it here again. The usability testing provided excellent and unbiased feedback about the user interface and the design. A different perspective from possible users helped us identify the pain points and work on them in further iterations.

  • Open source is all about community!

When I started my application process, I was absolutely terrified of talking on IRC. Not only because I was an absolute stranger, but here I was in a room of experienced people who are experts in their fields. I would check my message a hundred times before sending it. Although now I don’t over-think as much as I used to before sending a message, I still have to reach the point where there is no over-thinking before participating on IRC. But I had awesome mentors who helped me at every point and made me confident about interacting more with the community 🙂 I think this is something that probably all newcomers go through. A helpful pointer would be to remember that everyone is very helpful.

  • Take chances!

When Outreachy was suggested to me by a friend, I thought I would never be able to get an internship with Outreachy. The bar was set high and, judging myself, it seemed that I still had a long way to go. But I took a chance and applied, because learning more about the open source community and contributing to it was important to me. So, to all the future applicants to Outreachy, I would only say: give your best, and a bit more than that.

What next?

While I will continue to contribute to Cockpit, I am more excited than ever to learn more about design and how open source perceives and implements design. Then there is also a goal to spread more awareness about user experience and how it affects our day-to-day life. Since I am still a newcomer to the world of user experience, there is a lot to learn and implement. Also, I signed up for a UX specialisation on Coursera, so there will be updates about that too. There might be a lot of other side projects that I might work on, but we’ll talk about that when we come to it.


Bhakti Bhikne | Bhakti Bhikne | 2017-03-12 15:32:18

What is Outreachy?
It is an annual program that offers the chance to work remotely on an open source project for 3 months and be paid for it. The amount can vary; for round 11 it is $5500. And you don't have to write code: you can also do design, documentation, marketing, and other kinds of work that benefit the open source community. Originally, the goal of the stipend was to support women working in software development (as is well known, most developers today are men). More recently, the stipend has supported underrepresented minorities more broadly. It is sponsored by the GNOME non-profit organization.

Am I eligible for the stipend?
 You need to
  • know English at at least B1 level (what school gives you at a minimum)
  • be over 18 years old
  • not be a resident of Crimea, Cuba, Iran, North Korea, Syria, or Sudan, and identify as a woman, a trans man, or genderqueer (including genderfluid, or not identifying with any gender).
  • be participating in this program for the first time, and not have taken part in Google Summer of Code before.
  • be able to work 40 hours a week during the program period (May 30 -- August 30, 2017).
  • have the right to work in your country of residence.
  • not live in a country subject to US export restrictions or sanctions.

How do I apply?
In short: the selection round runs from February until March 30, 2017. You need to
  • find a suitable project
  • complete a small test task for it
  • wait for the result, meanwhile talking with the community and trying to dig into the task you want to work on this round.
 Ideally, of course, you would contribute to the project for about a year, taking on harder and harder tasks, and apply for the stipend once you have become a mid-level contributor, but for a first try you can do it this way too. For details, see here.

How do I get the stipend?
I couldn't put it better myself (click).

How do I set up a workflow for these 3 months?
Okay, if you applied and got the stipend, now the hardest part begins. If you have ever worked from home, you know how often your family needs your help right when your work is in full swing. I can't set everything up for you, but after my experience with Outreachy I do have some tips & tricks:
  1. Work at a desk; a couch or bed does not put you in the mood for work. You will be given a task that takes 40 hours a week for 13 weeks. Do you think you will have time to procrastinate? Either you pull yourself together or you fail the whole thing (and don't get the money). Idleness is the devil's plaything, right!
  2. Stay in touch with your mentor as often as possible. If (like me) you have trouble with self-organization, or this is simply your first time working from home, ask your mentor to check on you every day. Every evening, send them a report on the work done and your plans for the next day. As your self-control improves, you can switch to half-week sprints, and if your third eye opens, reporting once a week is enough.
  3. Most likely you will not be working on the project alone: besides you there will be other Outreachy participants, and beyond them the million-strong army of the open source movement, doing the same work as you, but every day of the year and for free. Don't hesitate to talk to experienced developers or other newcomers; in open source there is no one who wouldn't want to discuss a new feature with you or suggest an approach to solving a bug. By tradition, you can be on a first-name basis with everyone. Even Stallman =) The main communication channels are specific to each project, but usually they are IRC and mailing lists.
  4. Explain to your household that you are working: this is not just a hobby, you are being paid for this work. Set hours during which you must not be disturbed unless the apartment is on fire, for example from 8 to 12 and from 2 to 6. And get them used to the idea that you will be unavailable during those hours for 3 months.
This blog post is a selective translation of the detailed guide; I sincerely advise you to read it in full. Nothing remains for me but to wish you luck! Pass this on to everyone who you think should know about the program.

Asal Mirzaieva | code. sleep. eat. repeat | 2017-03-11 01:08:34

This Monday was the last day of my Outreachy internship. I’m sad it ended, but tremendously happy I had this experience! I consider myself incredibly lucky for the great and talented people I worked with, and especially my dear mentors, Dustin and Brian, who taught and helped and encouraged and advised and cracked jokes and patiently explained and just were always there for me.

I learned so much during these three months, I don’t even know where to start, but let me try:

  • I learned what TaskCluster is, and what CI is in general;
  • What REST APIs are, how they work, and how to write them;
  • The basics of how and when to ask questions (the most difficult topic in this whole programming business! well, naming things is hard too);
  • How to do `git rebase` properly and, in general, how to work with remote repositories collaboratively;
  • A million small things about how to communicate with people;
  • The basics, importance, and problems of testing;
  • More JavaScript, more React, more npm, some Express, some Azure, some Mocha, some Less.

The major highlight of the internship was the trip to Toronto. I may not have been as productive there as I had hoped, but it brought lots of fun, personal growth, and new faces. Another highlight was the Tech Topic, where I talked about GitHub integrations 😊

To all those who are in doubt about whether to apply for Outreachy, my advice is: Yes!! Definitely apply! If you have any questions for me, or need any advice, just drop a line!

Irene Storozhko | Stories by Irene on Medium | 2017-03-09 20:30:52


Graylog2 is a powerful tool for log management and analysis. One use case we had at my company was collecting the logs of a Rails application running on 5 different servers in a single location, so as to make debugging easy. It is built on top of ElasticSearch, MongoDB, and Java. First you need to set up Graylog on your server. These links are likely to help you.

Once it is set up, you will want to access the web interface, which runs on port 9000. You can use a single port for both the Graylog REST API and the web interface, or two separate ports. This is my nginx configuration:

location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Graylog-Server-URL https://example.in/api;
    proxy_pass http://127.0.0.1:9000; # assuming Graylog listens locally on 9000
}

location /api/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:9000/api/; # assuming a single listen port
}

Graylog Configuration

web_listen_uri =
rest_listen_uri =
rest_transport_uri =

Graylog has a very good community at https://community.graylog.org/. You may post your issues there.

Tessy Joseph John | tessyjohn | 2017-03-09 18:20:24

My entire life, I have been prone to getting stuck in the details of a thing. It’s one thing to know this, and another to continuously find ways in which it affects me and for which I have developed workarounds.

I’ve learned to recognize that sudden exhaustion means that I’m stuck in the details and need to take a break.

Whether it’s my surroundings, a particular task, or a website, details are likely to distract and overwhelm me. This may or may not tie in with my ability to notice details that others overlook.

I suspect it’s much of why I have trouble with clutter in my surroundings, whether at home or in stores and restaurants. I suspect it may also relate to my difficulty with noisy surroundings.

I was reminded of this tendency of mine during the recent Outreachy project. Of course, I was also reminded of the many workarounds I’ve developed to handle it.

CSS and Coding

When working on CSS, I started to wonder if the difficulty that I have with writing code is purely about the number of details involved. I understand coding fairly well, and have no problem talking about it with those who do it for a living. At the same time, trying to write code usually results in me being exhausted and frustrated, and not actually successful at creating the code. Modifying code is always much easier, I suspect because there’s simply less to deal with and I am less likely to get stuck.

Using CodePen helped, I suspect in large part because I could see the immediate effects of what I was doing. My strong tendency to break problems into smaller pieces also came into play, as when I was stuck on a particular aspect of the CSS, I’d just clone my Pen and take out the bits that weren’t currently relevant.

Transcription, summarizing, and brainstorming

Transcription and other detail-oriented tasks

Transcribing from audio or video means I’m faced with a wealth of information that needs to be expressed in a written way. For the first in a set of items that need transcription, I always find myself getting stuck and writing down _way_ too much stuff.

I tend to need frequent breaks when transcribing, simply due to the sheer amount of information and the fact that I will start to forget what’s actually important. After I’ve done the first in a set, it is usually much easier for me to identify what’s important and what’s not, so the rest will go more quickly.

Similarly, when working on a task that is part of a larger project — as most tasks are — I can easily get stuck in the nitty-gritty of the task and forget why I’m doing it. This makes it harder to actually perform the task due to being stuck and to not remembering the purpose.


One of my most effective workarounds, in addition to frequent breaks, is to summarize what I’m doing and what I’m learning. Whether it’s in a blog post, as with the Regional Hubs project, or in talking to others involved in the project, summarizing and explaining what I’m doing never fails to get me back out of the details. Of course, it’s also typically useful for the people with whom I am conversing and for my own later use.

It is typically easier for me to write than talk my way out of being stuck, as long as I write as if I have an audience. And Medium’s interface is _fabulous_ for this. It doesn’t get in the way of what I’m trying to say, and is minimal enough to not itself act as a source of distracting details.

It’s also helpful to have written logs of conversation, which can be harder to get with people I’m speaking to in person. I retain what I read much more easily than what I hear. For this reason, having a remote job can be useful, because most conversations are written and often easily logged. This is also why I tend to try to take notes during conversation, or ask people to send me written reminders. Similarly, I’m trying to add the habit of sending a written summary of what I understood from spoken conversations when I am concerned that I missed something.

I strongly suspect this need to summarize and explain to get out of the details is why I am good at explaining things. Lots and lots of practice, plus that being how I understand things better.

I also strongly suspect this is why I so badly want other people to work with or near: other people and the need to explain what I’m doing help me stay grounded in the overall purpose of what we are doing.


Like discussing what I’m doing with other people, brainstorming with others is fabulously useful, especially if they know something different about the topic than I do. While I might get stuck in the details when investigating something on my own, having someone else there means that they might not get stuck, or at least we can work together to pull ourselves out of rat holes.

Of course, brainstorming also brings in the wonderful thing called ‘other people’s perspectives’. No one can think of everything, no matter how hard they try. Involving other people means that together you have a good chance of coming up with things that work better than what either of you would come up with alone. People are very good at building on each other’s ideas, and often find it enjoyable, as well.

Data Analysis

Analyzing data typically involves a great deal of detail work. There is usually a great deal of data, and it’s all too easy to lose track of the big picture of why the data was collected in the first place and what the goal actually is.

I _love_ that analyzing data in the UX world is often a group experience, whether through affinity mapping, brainstorming, various methods of prioritizing, and other things that aren’t currently coming to mind. It means that I don’t get stuck as often.

In grad school, analyzing data was often an exercise in figuring out ways to not get stuck in the data and remembering what I was there for. Analyzing data alone is not good for my mental health, as I’ve not yet found useful ways to keep myself on track for long periods of time. Statistics are hard for me, not because of the math, but because I have trouble remembering what to do when or why.

I also love that in UX there are often diagrams to remind you what research methods are most useful when. I’m sure that’ll come more easily to me with practice, mind you.

Speaking of research methods…

Learning UX is full of details

I think the biggest problem that I had when trying to learn UX on my own was the sheer amount of information. Having an internship and people to work with means that I had a way to focus.

Pre-internship, having had projects that I was working on didn’t help enough in terms of focus, because there were so many options.

I tried to write blog posts about what I was learning, as you can see early in this blog. Much of the time, writing the blog posts meant that I kept finding out how much I didn’t know yet, and how much trouble I was having figuring out what to learn first.

There is a _lot_ to UX. I have the skills to do it, I know for certain. It can be daunting navigating the sea of possibilities to identify what I should focus on.

Visual Design is full of details

I suspect the trouble I have with figuring out visual design is that it’s full of details. At least when I was trying to do things in Inkscape, the sheer quantity of things that you can do meant that I often had no idea where to start. Even once I understood that there were sample style patterns in the hubs design github, there were still a lot of possibilities.

I have no idea what’s important to pay attention to in visual design. I don’t know how to tell what’s a thing that needs to be the same always (nor do I know how to make sure that’s true), and what the range of ‘reasonable’ is. And the number of tools in professional drawing programs is absurd. If I don’t know what I need, how would I possibly know what tools to use, when?

I’m sure this is a tractable problem to solve. At the moment, though, it’s an especially daunting one. It is probably not aided by my lack of visual imagination or memory.

I think this is why I’m so happy that Balsamiq exists. The number of tools available is much more limited, and rather than trying to guess what something should look like, there are a number of items that already have a template for you to use. Indeed, working in Balsamiq is kind of like having a lot of small templates that one can use as building blocks, rather than making it up as you go.

I worry that Sketch will be too flexible. I won a license for it, and I should have access to Macs in my household that I can play with it on. Indeed, after this internship, I am somewhat more comfortable with the idea of playing with it. I have gained some visual design knowledge just by frequently referring to the prototypes that Máirín Duffy made.

Portfolios are full of details

At the moment, I am trying to distill what I did in this internship into a portfolio format. I keep finding myself stuck in details, so I’m thinking that perhaps it makes more sense to create a presentation, first. I’ll need one regardless, and those force you to stay big picture.

In closing…

The world is full of details!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-03-09 16:08:39

This is my last blog of Outreachy. During this period, I finished the Chinese translation of GNOME 3.22 and completed most entries of GNOME 3.24; because new entries kept appearing, I talked with my mentor Tong and we decided to finish 3.24 after the freeze date and before the release date. I also improved the guideline of the Chinese team: I updated it on the basis of the latest English version and borrowed some ideas from the Free Software Localization Guide for Chinese (China).

About the future: I’ll complete GNOME 3.24 with the other translators before 22nd March, and I have some other ideas about the Chinese team’s guideline that I’ll try to implement. Besides, I want to make more contributions to GNOME beyond L10n, and I would love to become a module maintainer in this community, which will take me deeper into GNOME. Also, I’m trying to spread the word about Outreachy to others, especially Chinese girls.

I’d like to tell the applicants of round 14, and anyone who wants to apply: DON’T BE SHY. Show your abilities to your mentor as much as possible, and don’t be confined to the test she/he gave you. And if you are selected, an important thing is getting to know more people in your organization and the other interns; it will help you become fully integrated into the circle of FOSS.

Above all, it was a wonderful experience. Outreachy gave me a nice opportunity to learn and contribute to FOSS; thanks to the people who helped me during this internship.

This work by Mandy Wang is licensed under a Creative Commons Attribution-ShareAlike 4.0 International

Mandy Wang | English WoCa, WoGoo | 2017-03-09 14:27:42

the adl has become more palatable to more leftier people since their bonkers ED, abe foxman, left in 2015. but the way they dress israel advocacy work as general “anti-hate” work is really problematic, especially when they use the anti-hate banner to lump palestine solidarity activists in with, like, richard spencer.

where is some of this lumping happening? in the blog tags!

i wrote this thing to get blog posts off the old adl website. when i tried with just urllib, i got a weird javascript error, which, upon googling, i learned had to do with a script some websites have that says “don’t load anything if the requests aren’t coming from an actual browser.” the workaround appears to be using a headless browser. i think what’s happening below is that phantom.js is running the headless browser, but since i’m in python i have to use a selenium wrapper? i’m not positive why i need both.

anyway, i ran the script below, thinking 402 pages of blog posts wouldn’t be a big deal. 402 pages of blog posts is, in fact, a big deal. so i changed the params to only grab pages 360 to 402.

# $ npm install phantomjs-prebuilt
# $ pip install selenium

# http://stackoverflow.com/questions/13287490/is-there-a-way-to-use-phantomjs-in-python
from selenium import webdriver
driver = webdriver.PhantomJS(executable_path='node_modules/phantomjs-prebuilt/lib/phantom/bin/phantomjs')

base_url = 'http://blog.adl.org/page/'
# create a param list that includes pages 360-402
params = list(range(360, 403))
# create a txt file to hold everything
txt_file = open('adl-posts.txt', 'wb')
# for each page in the list
for p in params:
  # get the url of the blog page
  driver.get(base_url + str(p))
  # get everything in the id="content" tag
  element = driver.find_element_by_id("content")
  # encode it and write to txt_file
  txt_file.write(element.text.encode('utf-8'))
# close the txt file
txt_file.close()


cool! i grepped ‘students’ but then thought ‘tags’ were more interesting.

grep 'tags' -i adl-posts-pages-360-402.txt > tags2.txt

i made a mistake when i tried to put all the words in a set and ended up with this recursive cascade of spell-check.

but this worked:

cat tags.txt | python toSet.py

import sys

count = {}

for raw in sys.stdin:
  raw = raw.strip()
  tags = raw.split(', ')
  for tag in tags:
    if tag in count:
      count[tag] = count[tag] + 1
    else:
      count[tag] = 1

for key, val in count.iteritems():
  # if the tag appears more than 15 times
  if val > 15:
    # print it
    print key + ": " + str(val)


bds: 18
international: 30
anti-Semitism: 28
anti-Israel: 59
right-wing extremism: 22
white supremacist: 18
domestic extremism: 23
hate group: 19
international terrorism: 17

ugh. fuck that.

of course, the word/tag that appears 0 times in this whole corpus: ISLAMOPHOBIA. so an acrostic-ish. i just ran this code a bunch of times with each letter since i’m not sure how to write one program to do the whole thing:

import sys

for raw in sys.stdin:
  raw = raw.strip()
  sentences = raw.split('. ')
  for sentence in sentences:
    count = sentence.count('A')
    if count == 1:
      if len(sentence) < 100:
        print count
        print sentence
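for what it’s worth, one program could loop over the letters of ISLAMOPHOBIA itself. here’s a sketch (the `single_letter_sentences` helper and the tiny inline corpus are made up for illustration, and it’s python 3):

```python
def single_letter_sentences(lines, letter, max_len=100):
    # collect sentences that contain `letter` exactly once and are short enough
    hits = []
    for raw in lines:
        for sentence in raw.strip().split('. '):
            if sentence.count(letter) == 1 and len(sentence) < max_len:
                hits.append(sentence)
    return hits

# tiny made-up corpus standing in for the grepped blog text
corpus = ["ADL has urged the UN to defund it. A rundown of the call."]
for letter in 'ISLAMOPHOBIA':
    for sentence in single_letter_sentences(corpus, letter):
        print(letter + ': ' + sentence)
```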

example output per letter before i added some additional filters:

the “poem”:

But I know who writes your pay checks.”

which maligns and debases the Jewish State.

ADL has repeatedly urged the UN to defund and disband it.

A rundown of the call:

Efforts by Methodist anti-Israel activists to divest from Israel began in 2008

One of the activists argued that these demolitions are made possible by U.S

The boycott effort was spearheaded by a Dubai-based Palestinian novelist named Huzama Habayeb

Jewish Voice for Peace Promotes Anti-Israel Hagaddah

One of the men,

The event hosted a former Israel Defense Forces (IDF) soldier, Sergeant Benjamin Anthony.

The Muslim Student Union at UC Irvine is at it again.

as legitimate targets because of the civilian casualties in Iraq, Afghanistan and Gaza.


weird thing:

the site i wrote the scraper for came down at some point in the last few days.

mysteries. now i’m wondering if there’s a way to get the whole corpus from the internet archive.

Jen Kagan | ITPPIT | 2017-03-09 03:38:48

Getting the in-person stuff to work was crazy complex. One thing for remote testing: it’s a lot easier to organize!

Thanks for being willing!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-03-08 20:25:51

Overall, I loved it. Sure, there were annoying bits, but there are always annoying bits no matter what the job is.

Good things

There were lots of good things!

First, I had the best mentor. It helps that I already knew her, and that she offered to be my mentor when she suggested Outreachy to me. She’s been unfailingly helpful and kind, and very supportive.

At the very start, she offered me the choice of working on small scale existing UX tickets in Fedora, or doing a full-fledged project. The former would have been easier, in some ways, but not nearly as useful for progressing my career. The former would probably have been easier, if less useful, for her as well.

Second, the fedora-hubs team is a good group of people. Welcoming, helpful, and unfailingly polite. I may have only been there for a few months, but I will miss them.

Fedora people as a whole were similarly helpful; I had nothing to offer my interviewees and participants but my goodwill, and everyone I asked was happy to help out when they were able.

Third, the task was an interesting one. I think at this point I’d probably describe Fedora Hubs as a whole as an interface that consolidates and filters information about and from many different places so that an individual can find what’s important or interesting to them within Fedora. I probably need to throw something about making it easier for new Fedora users to get involved, although it’s hard to say if that’s Hubs as a whole or specific to the Regional Hubs that I was working on. Or both! Probably both.

I’d say the overarching goal for Regional Hubs was to encourage and support community within Fedora. Some of the problems that we were trying to solve were as simple — but not easy — as helping new users more easily get involved with the Fedora community, encouraging in-person social interaction to help people become and remain connected, and helping people find each other and events. Some of these we knew were problems ahead of time (like new users getting and staying involved), and some came up during the interviews (finding people and events).

As some of you likely saw while reading along, locations are hard. This made for a very interesting discussion to figure out how we wanted to handle that, and there are still aspects of it that I suspect need more attention. However, if we want people to be able to find people and events near them, locations are also really important.

I most enjoyed the discussions in which we were exploring the bounds of what we needed to know or do. This included brainstorming in general, the aforementioned complications around locations, and the conversation around the feasibility of the mockups in which we touched on how Hubs might suggest new regional hubs.

Neutral things

I didn’t really get a chance to learn more about visual design and how to translate from a mockup to a higher fidelity design. This was as much about available time as the difficulty of explaining it. I do have an example of the before and after versions of this for one of my mockups, and Mo has sent a screencap of creating mockups in inkscape. Hopefully these will be useful!

I didn’t finish creating the CSS for the high fidelity visual design that Mo had already created. I got stuck on translating from table to div, and needed to focus my attention elsewhere.

Less good things

First, I really don’t like working remotely. I like people, and having people around is good for me. I also like being able to talk to people about what I’m working on and have them already have the context and knowledge to have productive conversations. This is still possible remotely, but there’s something missing from it in that context.

Second, and relatedly, I feel like remote usability tests and interviews are not as good. They do the job, for sure, but I feel like I missed out on stuff by not being _there_ with the participants. This is likely not helped by the connection to some of the locations participants were in being slow or intermittent.

Unfortunately, I was not able to do any local, in-person usability tests due to snow and other troubles.

This may actually be showing my bias from having done psychology graduate work: all our participants were in-person.

Third, transcription of interviews and usability tests are _annoying_ and really time and brain-power consuming. I knew this already, from my work with video and audio of people’s interactions with robots and with each other.

On the plus side, interviews and usability tests have less content to deal with, since I don’t need to identify and describe every gesture and every word spoken. Nor do I need to parse through 32 different recordings to try to find and appropriately label the right data to plug into statistical software to find patterns.

Fourth, Git and github and pagure have a higher learning curve than I’d like. This is not helped by the need for ssh keys in all sorts of places. I still wish it were possible to put my public key in _one_ place and have all the tools needed in Hubs work use it. A lack of communication between tools is a very common problem in all sorts of industries, and not just around ssh keys.

Fifth, having my internship include Xmas and New Years early on meant that I was rather less productive than I’d have liked around then. I needed a fair bit of guidance at a time when people weren’t around. Annoying, but not awful.

In summary

Good program, A+++!

Seriously, I’m glad Outreachy exists in both a theoretical ‘getting more diversity into open source’ sense, and in a practical ‘this was fabulously useful to me’ sense.

I do wish I could see this project through to fruition. But alas, that is not how Outreachy — and many other internships — works.

Now, to put this project into and otherwise update my portfolio!

(As a reminder to myself and others: the ‘story’ that people talk about when creating portfolios is a combination of providing context for the photos and graphics and screenshots you include, and showing what you have done vs what others did, what you were trying to accomplish, and your thinking about it.)

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-03-08 18:57:06

Below is a script I used to monitor RabbitMQ queues. We were using a microservices architecture, in which services communicate through a RabbitMQ broker: services publish messages to the broker, and the broker forwards them. One bottleneck is in this communication. Due to some memory issues, the consumers of these messages hung, and the message count in a particular queue went too high. So I decided to write a script that sends an SMS and an alert to a Slack channel whenever the message count exceeds a threshold. Note that I have used the slack-notifier gem: https://github.com/stevenosloan/slack-notifier. You could also just curl.

require 'net/http'
require 'uri'
require 'json'
require 'yaml'
require 'slack-notifier'
require 'sevak_publisher'

CONFIG = YAML.load_file(File.join(__dir__, 'rabbitmq_config.yml'))

def sent_alert_slack(message)
  notifier = Slack::Notifier.new CONFIG['slack_settings']['notification_api'],
                                 channel: '#rabbitmq-monitoring',
                                 username: 'notifier'
  notifier.ping message
end

def monitor_rabbitmq
  rabbitmqctl_url = CONFIG['rabbitmqctl']['url']
  rabbitmqctl_user = CONFIG['rabbitmqctl']['username']
  rabbitmqctl_password = CONFIG['rabbitmqctl']['password']
  uri = URI.parse("#{rabbitmqctl_url}/api/queues")
  request = Net::HTTP::Get.new(uri)
  request.basic_auth(rabbitmqctl_user, rabbitmqctl_password)
  req_options = { use_ssl: uri.scheme == 'https' }
  response = Net::HTTP.start(uri.hostname, uri.port, req_options) do |http|
    http.request(request)
  end
  queue_details = JSON.parse(response.body)
  queue_details.each do |queue|
    output = { name: queue['name'],
               messages: {
                 total: queue['messages'],
                 ready: queue['messages_ready'],
                 unacknowledged: queue['messages_unacknowledged']
               },
               node: queue['node'],
               state: queue['state'],
               consumers: queue['consumers'] }
    if output[:messages][:ready] > 100
      sent_alert_slack("RabbitMQ QUEUE High! \n #{output[:messages][:ready]} :\n #{output}")
    end
  end
  puts "\n", Time.now
rescue => e
  puts "Error: #{e.message}"
end

monitor_rabbitmq
Tessy Joseph John | tessyjohn | 2017-03-08 12:47:48

In these past weeks I started with an iptables translation.

We know that nftables is here to replace iptables, so it is natural that many of its users have their preferred rule sets written for iptables, and would appreciate an easy way to set up a similar ruleset using nftables; for this, iptables-translate is provided.

Using iptables-translate is very simple: you just write your iptables rule, and it outputs a similar nftables rule if the rule is supported; if not, the iptables rule is printed back. A usage example:

$ iptables-translate -A OUTPUT -m tcp -p tcp --dport 443 -m hashlimit --hashlimit-above 20kb/s --hashlimit-burst 1mb --hashlimit-mode dstip --hashlimit-name https --hashlimit-dstmask 24 -m state --state NEW -j DROP

Translates to:

nft add rule ip filter OUTPUT tcp dport 443 flow table https { ip daddr and timeout 60s limit rate over 20 kbytes/second  burst 1 mbytes} ct state new counter drop

The above example comes from the module I wrote the translation for, hashlimit; it’s similar to flow tables in nftables. Each module is translated separately, and the code lives in its iptables source file. Much of the supported feature set already has its translation written, but some parts still need work. Writing them is an actual nftables task in this round; future interns, go check the xlate functions in the iptables files, it can be of great help to the community and to yourself 🙂

After this task I looked into JSON export of the nftables ruleset; in the future, importing a ruleset via JSON should also be possible, but for now only exporting is. This feature is still being defined and many changes are happening. What I did was complement a patch that defines some standard functions and uses them to export rules. JSON in nftables is a little messy; it will probably get more attention soon.

Now, about misfortune: last week an accident happened and my notebook is no longer working. I’m trying to have it fixed, but it has stalled my patch contributions. Hopefully this will be sorted next week and I can finish some patches.

I’ll probably write a new post about my experience with Outreachy soon, now it is late and I need to go home :), see you.

Elise Lennion | Elise Lennion | 2017-03-08 00:43:16

Outreachy ended yesterday, so I’m working on cleaning things up for others to use.

I have completed my summaries of the initial interviews for event creation/planning and ambassadors as resources. I did not manage to translate the CSS from table to div, as things were behaving very oddly when I tried. However, I did pass along the CSS/HTML work I had done to Máirín Duffy.

Mo also has access to all the recordings of my interviews and usability sessions, the survey which ended up with 140 responses, and the MyBalsamiq instance for the Fedora Design team in which I put my mockups. I will put the anonymized transcripts and spreadsheet for the usability testing into the User Research and Analysis ticket shortly.

I hope to use the travel expenses for Outreachy to attend FLOCK in Cape Cod this summer.

Usability Testing

Here, I’ll summarize how the usability testing went overall, what sorts of things I found, and some of the things that Mo and I discussed in our analysis. Due to time restrictions, we did not make it through everything I found for analysis purposes, but I will be available for questions and clarification as needed.

As I said in my last post, I ran remote usability sessions on my prototypes with 5 people. The major time sink for this was creating transcripts, although mostly not word-for-word. As I went, I highlighted the things that seemed relevant so that they would be easier to find when I was summarizing the findings. I ended up using a spreadsheet to organize the findings, first by prototype, and then by content to make reviewing it easier. For more on this, please see the attachments to the research and analysis ticket.

I then met with Mo to discuss my findings and to start preliminary analysis of them. We initially focused on the things that multiple people reported, with some side conversations around related problems when necessary. In many cases, the decision on what to do was pretty simple (often because it was effectively a paper prototype and thus not as interactive as it could be). However, some of the problems my participants ran into were not simple fixes and required a lot of discussion.

One of these related to the problem of deciding who to contact first in a filtered list of people. As it was, the prototype did not show anything about your relationship with those people. Even when you clicked on someone, it was not obvious that the hubs and friends listed started with the ones you had in common:

Do I know Jen Smith better than John Holsberg? Who knows?

When I asked people to find someone to contact, whether because they were visiting an area and wanted to find local Fedorans to meet up with, or because they needed more information about an upcoming event, they had trouble deciding who to talk to. In this prototype, only one person was clickable (Jorge), but my participants had no idea if that was who they actually wanted to talk to.

Affinity Mapping?

Some people suggested adding something signalling whether they have an existing relationship with the people in the list. In talking with Mo, however, it became clear that this can get very complicated, very quickly.

We can decide that people who are following each other are friends. However, what if you think someone is fabulous, but don’t need to know about their activity within Fedora, or don’t have the time to add it to what you are already paying attention to? At the moment, following someone means that you see what they are doing in your stream, so perhaps mutual following is insufficient information.

Maybe you’re on a team with someone, so you’re theoretically interacting with them regularly. This is more likely to be useful information about how you are connected to that person, except possibly if you’re on a huge, geographically dispersed team.

Maybe you talk to someone on IRC regularly. That probably means you know them, right? Maybe work with them? Sure, but that may not mean you want to meet up with them.

It absolutely would be helpful to Fedora members to know their relationship, if any, with people they are considering contacting. Precisely how to do this remains to be seen.

Supposing we use team affiliation, following, and conversation frequency. How do we signal this kind of information at a glance without making the information too overwhelming and difficult to process? Icons can be good _if_ they are some of the very few that are quickly understood. Text can get unwieldy. This will have to go unanswered for now, but requires more thought.

Online Status

It’s helpful to know if someone is online right now. It’s probably not as helpful to know precisely when someone was last online when you’re looking at a long list of people.

Lots of applications show that someone is online using a simple blue (or green) dot near their picture or name. This may be the best way for us to do the same, and will also free up some space in the search results for affinity signalling. At the same time, we probably want to make it clear who hasn’t logged in recently. Since this will be pulled from a list of people with FAS accounts, there may be people in the results who have never logged into Hubs, and we need to show that somehow. We can easily show more precise information about when someone was last online in the dropdown, and from the perspective of too much information, that definitely makes the most sense.

Who wants to meet strangers?

Another frequent problem was that my participants didn’t especially want to be contacting random Fedora people out of the blue. Most said that if they knew someone liked meeting strangers, or were interested in helping others out in some way, this would reduce that barrier to contact.

So, instead of having the ability to select only ambassadors — who may or may not actually want to be contacted right now — make it possible to select people who are interested in meeting new people or answering questions or otherwise welcome random contact. Of course, what this needs to be called is an entirely different question. It might be as easy as an ‘Open/Closed’ sign like some businesses have, but that may be too easily misinterpreted or daunting for other reasons. More research is definitely needed here!

What do the search boxes accept?

In looking at the major search and filter areas of the list of people and events, it can be difficult to determine what is valid.

People search and filter
Events search and filter


When asked to search for people in Berlin, many tried to use the ‘all people’ search to look for Berlin. The actual intent is that they can change ‘you’ to a specific location other than where the system knows they are. The ‘all people’ search box is for searching by people’s names, nicknames, IRC nicks, and email addresses.

Similarly, when asked to find events in Los Angeles, the ‘all events’ search box was very tempting. Again, the ‘you’ box is there to allow you to change the location away from your own location.

So how does one make this clearer?

We had a few thoughts on how to best handle this one. Mo pointed out that a common pattern is that searches are along the top, and filters are along the left side. In that case, why not let people search by location in the search box?

We also considered that this might be a lot clearer if it were possible to actually type into those fields: once you start typing, type-ahead would quickly make clear the sorts of things the search box was expecting.


Another, similar problem was that the ‘10 miles’ and ‘you’ fields (different for events and people) were somewhat unclear about what they could take as inputs. My mockup had dropdowns for these, since we wanted not only to allow people to start typing in those boxes, but also to make clear the kinds of things that were possible there. However, in no case did people realize that they could simply start typing rather than only selecting from the dropdown options.

events near [place] or in [place]

You can pick near ‘you’, near a specific location, or within an area. The latter case wouldn’t need the ’10 miles of’ piece, though.

People near [somewhere], including everyone

I wanted to have the option of everywhere because otherwise actually specifying something in ‘all people’ makes no sense. At the same time, perhaps you _do_ care where. This one confused me and I failed to explain my reasoning to Mo.

Events or people within [miles of somewhere] or [a place]

How far from the place you specified? Or, maybe you just want to say within a place?

This is likely an even more complicated problem than I suspected, but hopefully it was largely due to the fidelity of my prototype.

But what do we do?

We did not come to an agreement on the best way to handle either of these cases, in part because we suspected that this was a problem with the fidelity of the prototype.


Some of the problems I ran into were fairly easy to solve. Others need additional investigation and consideration. There are findings from the testing that have not been reviewed, and I hope that Mo will be able to find the time to go through them and contact me as needed.

I have no question that the usability testing was valuable, and hope that the existing team will be able to continue the work that I started. I wish that I were able to continue to work on this project, and shall see if there is any time available while I apply for jobs and hopefully start a full-time job soon.

I found this to be a fabulously useful experience, and want to thank Máirín Duffy and the Fedora Hubs team for being helpful, approachable, and friendly. And of course, to thank Mo again for having mentioned the possibility of Outreachy to me.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-03-07 21:14:45

The mockups created to illustrate my assumptions were tested with users. Although my initial goal entailed testing the Contributor page, early research indicated that various aspects throughout the website could better support newcomers. For this reason, solely soliciting feedback about the Contributor page during testing would’ve felt like a missed opportunity. For example, if a newcomer lands on the website they may overlook the Contribute link in the footer. Therefore, I wanted to see the path users took to navigate to the Contribute page when links to it were included in the top navigation and the body of the homepage.


Four representative users similar to the personas were recruited to review the mockups. A script outlining the process and scenario was read to each participant. They were then observed while completing 6 tasks and answering follow-up questions related to each activity.

Activities and questions were structured around important goals for newcomers, which include:

  • Learn about contribution opportunities
  • Contact the project to ask a question
  • Identify members behind the project
  • Locate documentation
  • Join the mailing list

Initial Impressions

Overall, first impressions were favorable. Several participants commented about the simplicity and clarity of the website.

When asked if they thought their questions would receive a quick reply, a participant commented about the conciseness of the website.

“I think so. The site looks put together, looks clean and succinct.”

Another liked the warmth of the site’s colors as well as the overall simplicity.

“It’s very clean and to the point—how it works, 1,2, and 3—there’s not a whole lot of verbiage, so it gives me exactly what it is that I need. The objective is laid out in a succinct way.”

When first looking through the site and asked to give their overall impression, one participant commented on a revised heading.

“So you can help them improve the web…so pretty much anyone can do this. Pretty simple.”


Career Advancement

As mentioned in a previous post, many newcomers join an open source community to advance their career. When asked if they felt joining this community would help their career, several participants answered that it would, if they were software testers.

“My career? Maybe not. But if people wanted to start a career as software testers, I’m sure it would.”

On one hand, this could mean further clarification is needed in explaining that contributions to develop Webcompat.com and its tools are welcome. However, it’s also an indication that those currently working in QA, or perhaps those who aspire to be SDETs, would be good people to recruit. Perhaps guest blogging on testing blogs, or reaching out to them, would be a good way to spread word about the community.

Social Media Icons

When asked if they thought their questions would be answered quickly, another participant decided to explore the site’s Twitter profile. Although the Twitter link in the footer was quickly located, the link directed to a profile landing page instead of the full Twitter profile. I hadn’t initially thought much of it, but the user assumed the account was less active than it actually is, as only a few older tweets were displayed. This was a quick fix that has already been made.


“Last post was 6:35 am Jan 30—so no, I would say they would not respond quickly. And then I’d have to take my chances with email—I would probably not even try to ask them on Twitter.”

Contribution Page

Opinions about the newly drafted content for the contributor page were mixed. Two participants liked the clearly defined roles.

“What I do like is that it clearly states what it is—bug reporter, issue triager…and in it, which I really love is that there are links which take me directly to it, it doesn’t just explain it.”

However, one user thought the page was too text-heavy, a concern that’s come up before.


“I think it’s so much text—like there’s so much you have to read.”

Also, it may still be unclear that the site accepts contributions outside of the bug reporting process–particularly when quickly scanning the page. Two users commented that there was some initial confusion around how to contribute.

“It’s not straightforward on what various opportunities I could be engaged in, other than finding bugs.”

“Contribute [in the nav] is a bit misleading—it seems like there’s an opportunity to contribute to this particular website. Like to do some coding, or to help with this particular website. But I guess this just means like reporting bugs. I’m not sure it should be telling you about bug reporter. Like if it’s the part of contributing. It should tell you only about contributing—I’m not sure.”

With this in mind, an alternative contribution page was mocked up, with the goal of visually separating the types of contributor roles and breaking up the text.


Two participants expected a concrete way to sign up for each role. While looking under the Site Contributor role, a participant selected one of the links and expected to find more information about the role.

“At first it seems like a way to contribute to this website–but then again there’s a link to filing an issue.”

Directly linking to GitHub on contributor pages is common in open source communities. However, it might be worth giving newcomers additional support. This could entail detailed pages for each role, or call to action buttons that allow interested users to “sign-up” with their email. Users could then be sent a welcome email with more information and be directly connected with a mentor.

Success Story

When asked if they felt they’d be appreciated if they were to join the community, one participant suggested highlighting community members on the homepage.

“Maybe [add to] the front page—the best contributor of the month. Maybe some incentive to be the best contributor of the month—even if it’s just 10 dollars.”

Most of the participants seemed to skip over or simply not notice the success story on the contributor page. Changes to the mockup were made with the idea that it would stand out better if it took up more space.


Get Started Guide

We questioned whether it would be better to put the how-to guides on separate pages or to keep them in tabs underneath the contributor roles. Two of the participants interacted with mockups that kept the guides in tabs, and the other two interacted with guides that were placed on separate pages.

When asked to locate documentation or instructions, both groups easily found the guides. Therefore, I don’t believe adding the individual pages will frustrate the users or cause them to drop off.


Overall, the layout of the how-to guides was well received.

“I like the graphics and the bullet points that walk you through the process. It’s very visually appealing. I’m not overwhelmed. Easy steps laid out.”

One user suggested videos before being asked if they’d like them. And the other three agreed that a video would be helpful.

“It’d be nice to have a video on exactly what to do—cause I’m more of a visual learner—visual, hands on.”

I think the feedback gathered from users will prove helpful in deciding what content to include in the video.

Contact Page

“Seems like there’s not a real way to contact people.”

“I’m not sure if it’s ways of contact, it seems like ways of getting information or news updates.”


A contact page was mocked up with several options for reaching out to the community. The overall impression from users was that it wasn’t really a contact page. When tasked with finding a way to ask a question, two participants were unsure about the use of IRC.

“Kind of have no idea what an IRC is—what I would like though is a live chat, even though it might not be 24/7, but a certain time of the day you can talk to someone.”

“Most people I would suspect don’t use IRC, so I don’t see a way to ask a question here.”

One user felt that the given options of contacting the community lacked privacy.

“Twitter is quite public.”

When asked their preferred method of communication, two users suggested email. A contact form or the email of a single point of contact would likely meet users’ needs and expectations of the page.

Contribute Links

Two additional links to the contribute page were added to the mockups, as it was assumed the one link in the footer wasn’t enough. Participants most often selected the Contribute link in the top navigation, and one selected the link in the ‘Help Us Improve the Web’ copy.

Report a Bug and Issues Links

On the homepage, I added more context to the Report Bugs, Diagnose Bugs, and Reach Out to Sites links. My initial assumption was that the current links, without context, may confuse newcomers. Three users read the text and one user expected more information after clicking Diagnose Bugs.  One user didn’t read the added text and had no expectations for where the links would take them.

“I’m the kind of user who doesn’t like to read the text—I just like to click and see what’s in there.”

About or Team Page

Three of the four users were able to quickly locate members of the community by navigating to the about page. Initially, I was unsure that this was where team photos belonged. One user, with a similar thought, had a difficult time locating the members behind the project.


“I would’ve gone to ‘About’ to find out more about the community itself—not so much the members of the team.”

Clarifying this could be as simple as changing the page’s name from “About” to “About Us” or presenting the community as a completely separate team page.

Mesha Lockett | Outreachy UX Research and Design | 2017-03-06 20:03:52

Securing the Linux kernel by patching even its smallest vulnerabilities is of high importance, as these small things can cost a lot. One area that is exploited regularly is structures left in read-write mode. According to my mentor Julia, it is good to work on something small and focused, but at the same time of high importance. And I totally agree with her, because these small steps will make a huge impact on kernel security in the long term. It’s also quite motivating to get support on this from Kees Cook and Greg, as they, along with many others, consider this issue of great value.

It was Julia’s idea to see how the kernel has changed since 3.10 and the figures are quite impressive. Our long term goal is to see the two graph lines overlap.

  1. Total vs const non-array structures for kernel versions 3.10 to 4.10.
  2. Total vs const array of structures for kernel versions 3.10 to 4.10.
  3. Percentage of const structures for kernel versions 3.10 to 4.10.

Julia has already constified a lot of them, and the majority of my patches are lined up for the next release, i.e. 4.11. I am curious to see what impact my patches will make on the stats for the 4.11 version.

Bhumika Goyal | Stories by Bhumika Goyal on Medium | 2017-03-06 08:27:35

This week my internship with Zulip and Outreachy officially ends. I don’t intend to stop working on Zulip; one of the reasons I applied was that it was something I could see myself continuing to contribute to. I’ve learned new technologies, applied well-practiced skills, met new people, and explored things I knew would be a challenge. I’ve used this blog to talk about what I have been working on, but I haven’t said much about why I applied. I wanted to focus on the work at hand.

So now would be a good time for that. But first, a little background.

I’ve known about Outreachy for a while; I briefly considered applying back when it was still the Outreach Program for Women. At that time, I was a maintenance engineer — fixing software bugs for a medium-size tech company. I had been there a while and was thinking about different directions I could go in my career.

I started out on a pretty typical CS path, with a degree and jobs on engineering teams. But things rarely go like you plan, and eventually I landed in support and maintenance. I was writing code, but I wasn’t doing what many of my engineering peers were with automated testing, cloud services, and iterative Agile development. I had looked at some open source projects as a way to try new things, but few seemed approachable. And the timing for OPW was inconvenient.

Instead I joined a tiny, tiny startup. I began comfortably in C code but rapidly picked up anything in the product that needed to get done. Server features, network problems, mobile clients, monitoring, you name it. I wrote JavaScript and fixed Android bugs. I did all kinds of things I knew nothing about. Some of it stuck, and for some of it I still can’t explain what I thought was going on.

But the company didn’t go far, and I found it complicated to talk about something where little of my work was visible and the code proprietary. I have a hard time with portfolio projects because I can’t stay excited about an abstract problem solved in a theoretical vacuum. I’m much more interested in how interconnected parts work together, and that’s not something that shows well in a 30 second demo.

I knew Outreachy was not just for students, although mainly students and recent grads apply for it. It’s the nature of the thing: if you are an established working professional, it’s hard to take off a bunch of time to try something new. If you have other responsibilities besides work, doubly so. But I was able to, and saw it as an opportunity to explore a new area and build a visible record of my work. It’s an excellent professional opportunity, one that I’m fortunate to be able to consider. Even better that it improves open source software in the process.

There was one little word, however. “Intern.”

I’ve been an intern before. My school strongly encouraged all engineering students to do two “co-op” semesters, and I did. (I wrote documentation for a software company.) But as a middle-aged professional, sometimes when I mentioned I was applying for an open source internship program I’d get a funny look and a one-word response: “Why?” Wasn’t I a career software engineer already? I’d explain that it’s an opportunity to move into a new area and I’m excited about the possibilities and then everyone understood. But it was awkward. I was already questioning a culture where “rockstar” new grads land huge compensation packages and experienced engineers struggle through interviews about abstract CS theory. So, yes. Awkward. I had to think about that to be comfortable with it.

The application process was challenging, not only because I was learning a new codebase and new tools, but because I had to prepare a proposal for something I knew almost nothing about. I approached it as I would a professional task: spec and estimate new features appropriately scoped for a 3 month deadline. And how would I know what was reasonable? I had no idea. Yet, the experienced people answered my questions and encouraged me to build a solid but flexible plan where the schedule and tasks could be revised later. That was good to know. I was excited to learn I was selected. I was paired with two mentors, Sumana Harihareswara, already an active Zulip contributor, and Tollef Fog Heen, who has experience with services and APIs.

I knew I had signed up to do a lot of engineering work, and was confident I was able to execute to plan (for some value of “plan” at any rate.) There were new things to learn and a new codebase to become familiar with and all sorts of stuff that you deal with again and again when changing jobs. And this was a job, it was a full-time commitment over the course of the program. I wasn’t too concerned about that part.

The other things I would learn, I didn’t really know. Not in an “I have no clue” way, but more in that every new environment has things that come up or happen in unexpected ways. One new part in this was the open source component. I’ve worked on plenty of engineering teams; generally there is an overall design, and individual areas are parceled out to developers or small teams to refine and implement. There are many decisions to be made, but most of the big ones are (hopefully) at least sketched out in some kind of architecture plan. Often lead engineers have strong opinions about how and why and where.

My few interactions with other open source projects suggested that outside contributions were a nice thing as long as they weren’t too taxing for the core team. Clearly this was a different situation and I wouldn’t be left to my own devices, but it took some time to sort out where I was comfortable between working mostly on my own and seeking input beyond basic questions. After all, everyone was busy working on their own tasks, usually between other responsibilities. I was adding new functionality rather than working in an already established area, so I was unlikely to break a core feature. But I wanted it to fit with established standards and match overall goals. This was an area where my mentors were especially helpful: how often to ask busy people for feedback, what sorts of things are generally left to individual developers to handle.

Something I didn’t consider at first, and well into the program really, was learning as a specific goal. Of course, learning was a desired outcome: new skills that can be applied to other projects. Yet I’m accustomed to the task being the focus, and any necessary learning adjunct to that. I discounted the value of the effort I was putting into understanding new tools and environments, and was sometimes frustrated about my productivity. Was I hitting milestones fast enough? Sometimes chasing down problems made me question whether I was accomplishing anything meaningful. But then, in conversation with my mentors, I realized that was the point.

The biggest surprise over the course of the program had nothing to do with code. I’ve always been a strong writer, but I am best when I can edit and revise. Sometimes speaking to people face-to-face is challenging, but there is enough room in the back and forth of a live conversation that I can get my point across most of the time. (Stressful situations less so.) Zulip is a group chat system, so I was hardly surprised that I was going to spend a lot of time sending short messages back and forth. At a modest pace, this isn’t a problem.

What I was entirely not prepared for was having status meetings in chat. Attempting to convey complete thoughts about where I was on a task while at the same time tracking questions asked about multiple things was extremely difficult. It was like having an important conversation in a loud room, where so much cognitive effort is required to parse the words that there is little space left to compose a response. Chat is such a central part of the project that I kept trying until everyone was clearly frustrated. It took a phone call to sort things out, and then we agreed to have status reports by email. Any needed discussion can be handled in chat, but most of the information was already provided. That entirely changed the regular meetings from something I struggled to get through to an orderly sharing of information.

There were many other things besides the technical tasks originally in my plan. At the suggestion of my mentors (and to no great surprise) I was encouraged to submit a talk to a conference. It was accepted just a few days ago, so now I can continue on and actually write the full presentation for the event in May. I added career tasks to my plan like updating my resume and attending community events.

The visible GitHub activity will certainly be an advantage when looking for my next job. I’m happy to have found a project I enjoy participating in, and now I have several complete features I can show as code samples. I expect there will be more.

Andrea Longo | Feorlen's Other Blog | 2017-03-06 06:12:20

surya’s been helping me wrap my brain around vue/vuex so i can do more coding work on herbivore. i filed what i thought would be an easy issue in order to get started:

in my head, i was like, “oh, change the css :hover styling and call it a day.” did not take me long to discover that: no. u see, we’re changing the ui dynamically based on information we’re getting from the network. vue and vuex let you manage all this information by setting the starting state and then defining stuff you want to happen based on new information you get (with getters), actions you do, and mutations that occur to the state based on those actions. what follows is a walk-through of my process, written in present tense even though it’s already done because tense is confusing sometimes. here we go.

what i want to do is: based on which node’s mac address matches my own mac address, change styling to show who i am in relation to other nodes.

i start with the Network.vue file, the vue component that defines what table values you see when you use the network tool. this, reasonably, lives in the app’s ‘components’ folder. we already have some styling here that says “if the node is active, style it with a blue background and white text.” i use the same logic to say “if the node is the homeNode (me), style it with black background and white text.”

gotta add the css to correspond to the .homeNode class i just created:
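roughly, the idea looks like this (a sketch, not the actual Network.vue source; vue’s :class binding takes an object of className: boolean pairs, which i’m mimicking with a plain function, and the colors are stand-ins):

```javascript
// sketch of the conditional styling logic: vue's :class binding
// evaluates an object of { className: boolean } pairs, mimicked here
// as a plain function. 'active' and 'homeNode' are the real class
// names from the component; everything else is illustrative.
function rowClasses(node) {
  return {
    active: !!node.active,     // blue background, white text
    homeNode: !!node.homeNode, // black background, white text
  };
}

// used like <tr :class="rowClasses(node)">, with css along the lines of:
//   .active   { background: #36c; color: #fff; }
//   .homeNode { background: #000; color: #fff; }
```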

cool! but none of this is connected to anything. how do i trigger node.homeNode to be true and therefore highlighted black? i have to register the change in state from not-homeNode to homeNode. to do this, we have to make the homeNode property part of the initial state. this happens in a file called network-info.js, inside the ‘store’ part of the app (as opposed to the ‘components’ part of the app where Network.vue lives). the ‘store’ is where you say “this is the default behavior of these properties; any changes to that default have to be committed and updated as needed.” okay, so the initial state of homeNode, within network-info.js:
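the relevant slice of that initial state looks something like this (a sketch; nodes and homeNode are the real property names, the comments are my shorthand):

```javascript
// sketch of the initial state in network-info.js: homeNode starts
// out null because nobody has been identified as "me" yet
const state = {
  nodes: [],      // filled in when network info arrives
  homeNode: null, // set later by the SET_HOME_NODE mutation
};
```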

in order to get the state of the particular homeNode property, we create a getter:
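something like this (again, a sketch):

```javascript
// sketch of the getter: it just hands components the current
// value of homeNode from the state
const getters = {
  homeNode: (state) => state.homeNode,
};
```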

then we say “the action you can do to change the homeNode property is: set it with a function we’re naming ‘setHomeNode’. you commit this change to the state object based on the function we pass in.” in this case, the function is called SET_HOME_NODE. this is the mutation function and it’s defined further down the page, in the ‘mutations’ section.
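as a sketch, the action itself is tiny (vuex hands actions a context object with a commit function):

```javascript
// sketch of the action: setHomeNode commits the SET_HOME_NODE
// mutation, passing along the mac address it was called with
const actions = {
  setHomeNode({ commit }, mac) {
    commit('SET_HOME_NODE', mac);
  },
};
```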

the mutation that happens upon acting on homeNode is defined in the ‘mutations’ section:

it says:

  1. for each of the nodes listed in the state’s ‘nodes’ array,
  2. if the mac address is equal to mac address you pass in,
    1. then, the node you passed in is the homeNode!
    2. and the homeNode’s state is no longer null (which is how it was defined in the original state object).
  3. otherwise,
    1. the node you passed in is not the homeNode.

cool! one more tricky thing and then we’re done.

part of initializing the network info in the mutation ‘UPDATE_NETWORK_INFO’ is defining some properties of the router’s state and the user’s (my) device’s state. in that mutation, we have to add homeNode as a property and set it to ‘true’:
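a sketch of that part of the mutation (the shape of the payload is my guess):

```javascript
// sketch of the relevant part of UPDATE_NETWORK_INFO: when my own
// device's state is set up, mark it as the homeNode
const mutations = {
  UPDATE_NETWORK_INFO(state, info) {
    state.router = { ...info.router };
    state.device = { ...info.device, homeNode: true }; // this device is "me"
  },
};
```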

woohoo! now, when we’re in network view, my device in the table view has different styling from other devices:

next, gotta style the home device icon to match…

shoutout: these vue.js tutorials played a non-trivial part in getting this tiny but important change implemented.


Jen Kagan | ITPPIT | 2017-03-06 02:26:55

I attended the Linux Foundation's Embedded Linux and OpenIoT Summit in Portland, Oregon, February 21-23, 2017. My travel was sponsored by FOSS Outreachy and the conference registration fee was covered by a diversity scholarship from the Linux Foundation.

The Linux Foundation has posted video and slide sets here.

As an Outreachy alumna & mentor, I'll make my diversity & inclusion observation first:

While I felt in the majority as an older attendee, I was certainly in the very small minority as a woman. In some presentations I was the only woman in the audience. I noticed that the posted photographs of the conference contain a lot of women. You could probably take a headcount of the women present using those photos.

With respect to inclusion, i.e. did I feel included during the conference? Generally, yes. However, my note to my future self is to bring something to share at the technical showcase. Giving a presentation is certainly a way to get more involved, but I'd couple it with a table at the technical showcase for the best experience!

Here are a few of my favorite things...

Favorite Keynote: Sarah Cooper, GM of IoT Solutions at AWS

Sarah spoke of "Making Experimentation Easy" by applying similar methodologies that have enabled cloud software's rapid rate of innovation to embedded devices.

The conference photographer snapped and posted this photo of Sarah.
Although she was triumphant in her presentation, she did *not* actually
drop the mic!

Two take-aways from Sarah:

1) "Limit the blast radius"

Sarah was referring to carefully selected beta customers and failing fast. The complete quote is: "To fail fast you have to know you have failed, limit the blast radius and shrug it off."  Great principle to apply to everything you do!

2) "You should absolutely come straight to the GM with your resume!!"

This quote came in a private email from Sarah where she responded to my query about job opportunities at AWS. Her assertive "absolutely" resonated with me. As I sort through my contacts from my earlier years in the industry, I tend to skip over contacts that have risen high in management or technical ranks. Sarah's *absolutely* spurred me on to reach out to those old contacts and to make more new, cold contacts.

2 Things that made me go hmmm:

1) There are developers who believe that user space drivers are more efficient than kernel drivers. They don't mean more efficient as in they don't want to bother upstreaming the driver. They actually mean more efficient in that they think the user space driver performs better.  I didn't meet one of those developers...they were only whispered about ;)

2) Android Things (Intel & Google product) will not use our lovely IIO drivers. They will not include any non-essential drivers in the kernel image and there will not be a way to rebuild it. They have set up a git repository for the world to share user space drivers for sensors. 

Favorite Presentation:   Android Things & Android Things Deep Dive

Intel: Anisha Kulkarni, Geeta Krishna, Sanrio Alvares
Yes, even despite the aforementioned IIO driver exclusion.

Favorite Presentation: Voice-controlled Home Automation

IBM: Kalonji Bankole, Prashant Kanal
Demo'd a complete implementation using a serverless framework (OpenWhisk) and cognitive services (IBM Watson).

Favorite Sponsor Demo: runtime www.runtime.io

Favorite Sponsor Swag: SUSE. I now feel guilty that I don't use their distro.

Favorite Sponsors: Intel & SUSE & Linaro

Intel's staff at the booth were so knowledgeable I just assumed they were from a development group. I hope they were flattered, and not insulted when I asked. They were marketing.

SUSE's Patrick Quairoli shared insights on development at SUSE.

Linaro & 96boards: Lost his name, but he gave me a very patient tutorial on each board chained to his booth.  Also got lots of stickers to brighten up my clamshell!

Favorite Sponsor Hangout:  Kodi
Felt like home.  A too big TV with too many channels!  Super nice group of developers!

IIO Community Sightings: Matt Ranostay
Matt gave a great IIO Subsystem presentation!!! It contained a live demo of one of his more recent driver additions: heart rate and pulse oximeter. When Matt's heart rate only measured 42, he wrote it off to a loose connection, but I'm not convinced. I may go look for a bug in that driver ;)

IIO Community Hindsight: David Lechner
When I met David displaying his ev3 devices at the technical showcase, I didn't know of him from the IIO community. David has some drivers he wants upstreamed to IIO.  Potential Outreachy projects?  I'm fuzzy on this. Can we add support for sensors that basically have no datasheet, but that David has reverse engineered?

IIO Community Hindsight: Jason Kridner
Jason gave a presentation and also showed off some beaglebone devices at the technical showcase. Jason noted in his presentation that he'd like to see additional sensor support in IIO.

Alison Schofield | Linux IIO Notes | 2017-03-05 20:52:25

When I applied for my Outreachy project, I read in the description that it was a collection of smaller projects, and the smaller projects can be added or removed along the way. I resolved to not let anything be removed, and to complete at least those three that were in description.

And I did it! Well, almooooooost, because the badges will need some tuning, and one day is all I have left of my internship. I’ll probably use part of my Sunday to catch up a bit. I plan to continue making contributions to TaskCluster (I have 1.5 hours a day scheduled), so finishing later is also an option.

Anyway, to the badges themselves: crawling GitHub, I got really sick of all those status badges that look as if they came from Identical land. So I had this idea of designing custom badges for TaskCluster that nobody else has. Nothing major, really. All the GitHub badges look so much alike that coming up with something original is as easy as putting emojis on the darn things :)

TaskCluster error badge

Nice, don’t you agree? The feature is already live, but I plan to make some adjustments to the size so that TaskCluster badges align nicely with the badges from Identical land (they should be put together so that anybody can fully experience how superior TaskCluster is, even in badges). I’ll also make them clickable, so you can view your task group in TaskCluster.

Irene Storozhko | Stories by Irene on Medium | 2017-03-04 19:52:22

It’s crazy how fast the past 3 months have gone by, but I’m in the final week of my Outreachy internship with the Wikimedia Foundation!

What I’ve been up to

The week after the Wikimedia Developer Summit, I spent 3 days working from WMF headquarters in San Francisco. It was great to meet and work with some of the team in person, mainly my mentor, Tilman Bayer, a senior analyst at WMF. The office is spread across a couple floors with lots of meeting rooms and open spaces for people to collaborate.

Presenting at WMF about my journey into data analytics

I gave a talk at a PyLadies San Francisco meetup, hosted by WMF, about my journey “Becoming a Data Analyst”. About 70 people attended and this was the largest crowd I’ve ever spoken in front of! I didn’t know until just before the presentation that it would be recorded, but now I’m glad to be able to review my presentation skills. Given that I had almost no prior public speaking experience, I’m happy with my talk. However, there are a couple things I want to work on, such as not swaying around as much and limiting the number of “uhms”. I’m also glad that it was recorded so that I could help others by sharing my experience learning data analytics, which was one of my main goals when I started this blog. I wanted this talk to be useful for those who aren’t even sure where to begin learning, and it was really gratifying to have people come up to me afterwards (and later random people message me on LinkedIn) and say it did exactly that.

For the past few weeks, I’ve worked on developing and exploring a new privacy-friendly retention metric to help WMF understand how often readers visit the site. This metric utilizes an existing instrumentation (introduced fairly recently for the unique devices metric which launched in January 2016) to calculate statistics for the timespan until WMF sees a device (desktop or mobile phone) return to the site. It’s been interesting work and I’ve learned a lot about web analytics. I’ll have a separate blog post about this project soon.

I also started work on a new privacy-friendly engagement metric for WMF by vetting data quality of the new table for this metric, which has already been useful to the developer and product manager of this product team. However, this project won’t be complete before the end of my internship, so I’ll likely have to hand it off to the team. This project aims to understand how long readers engage with the website by measuring the time a reader has the site open in a browser window.

Reflections on the internship

Overall, I think the Outreachy program is fantastic and would highly recommend working with WMF (by the way, applications for the next round are now open). I felt that my projects were useful to the community and at the right technical level for me. I was never bored with my work, but I also wasn’t totally overwhelmed with the technical complexity. In Python, I gained a deeper understanding of the Pandas and matplotlib libraries. In SQL, I learned a lot more about aggregate functions and nested subqueries. I also became familiar with Hive and Hadoop.

One part of the internship which I enjoyed was working with global data. There are currently 295 Wikipedia language editions. Although I didn’t interact much with members of the global WMF communities, it required me to think on a global scale. For example, while working on the article sections heading project, I learned how other language editions refer to similar headings differently based on their language norms.

This was a fully remote internship. Thankfully WMF makes it easy to work from anywhere in the world by using tools such as IRC, Google Hangouts, Blue Jeans, Etherpad, live streaming meetings on Youtube, and documenting notes from meetings to keep people connected.

Four times a week I had a check in with Tilman via IRC and once a week we had a Google Hangout. This worked really well to communicate progress, blockers, ask questions and discuss projects. Outreachy only requires mentors to check in with interns twice a week, but I thought it was extremely helpful to have daily check ins. About once every other week, I had a Google Hangout with Jon Katz, the Reading Team Product Manager, to discuss broader issues outside my day-to-day activities (general questions about WMF, how we felt the internship was progressing, etc.). Although it wasn’t related to my project work, it was nice to have dedicated time for other discussions.

There are many obvious perks to working from home (mainly that I don’t have to get out of sweatpants), but sometimes it’s weird to be at home alone all day. Occasionally, I worked from coffee shops, but often found the wifi issues weren’t worth it. I’ve been working from home for over a year now and to balance this out, I schedule lots of evening activities like fitness classes, events with friends/family, or meetups. It can also be strange to work with people but not know them on a personal level, which develops naturally when working together in person but is much harder on remote teams.

The timeline for my project work was flexible, so I was able to prioritize the most important work, but unfortunately, I won’t finish all the projects originally planned. Part of this was due to unexpected delays such as system issues and maintenance, as well as tasks taking longer than planned. Some of it was a deliberate decision to spend more time on projects that were going well to make sure I did a thorough job. I’ve made sure to document everything for a smooth transition and will also spend a few hours over the next couple weeks to finish additional work.

Personal Takeaways

During my internship, I identified two things I want to get better at. One is sharing my work in progress. I secretly worry that until my work is complete, since it’s not perfect, someone will judge me for this. Of course, this is silly and sharing work in progress is important to make sure I’m not making early mistakes which can affect the results, brainstorm additional ways to solve a problem, and get feedback. Since I had check ins every day, I had to share work in progress and I’ve gotten more comfortable with it. Most work at WMF is done in the open with lots of communication from others (who might not directly be working on that project) and I’ve seen how beneficial this can be to successfully completing a project.

I also realized that sometimes I obsess over small details that seem strange but won’t really affect a project’s overall results. Data is messy and there are almost always anomalies, but it’s important to know when to dig into these more and when my time is more valuable working on another task.

Overall, this experience was totally amazing. I worked on one of the top ten most popular websites in the world, improved my technical skills, got better at communicating and documenting my work (not something I had to do a lot when studying on my own last year), had fun and got paid to do it all! I want to thank the Gnome Outreachy program for making this possible and helping women all over the world break into open source projects, sharpen their tech skills, and gain confidence in their work. I also want to thank WMF and specifically, the Reading Team for accepting me as an intern and supporting this program. Last, I want to thank Tilman for the guidance and time he dedicated over the past few months to my internship. I’m incredibly grateful to have had this opportunity!

Final Report for Outreachy Internship was originally published in Becoming a Data Analyst on Medium, where people are continuing the conversation by highlighting and responding to this story.

Zareen Farooqui | Becoming a Data Analyst - Medium | 2017-03-04 15:03:27

Wondering what’s in store for Boston Summit, May 8-11? Apart from attending dozens of sessions led by professionals in the industry, meeting a ton of new people, and taking home lots of great swag, you’ll certainly bring home a great deal of ideas and become capable of contributing more efficiently. As per reports, around 50-60 percent of OpenStack Summit attendees are first-timers. As a grantee of the travel support program and a speaker at the previous OpenStack Summit, I hope this piece conveys my enriching journey and keeps you from feeling overwhelmed in the whirlwind of activity.

Click to view slideshow.

The every-six-month release cycle and summit gives OpenStack contributors, vendors, and users a real-time view of growth and change, made up of subtle shifts that can often be difficult to notice. The summit typically lasts four to five days. I participated in the main conference and the design summit, focusing mostly on the keystone project. But before I get into my breakdown of the action, I’d like to extend many thanks to all those who planned, participated, and helped to make this a tremendous event.

Pregame: I didn’t want to be stuck at the registration desk while everyone else was off to the races, so registering was the first thing I did after reaching the venue. Being a speaker gave me an extra edge, and in no time I was holding my access pass along with the OpenStack swag T-shirt. I then familiarized myself with the conference space using the venue map so that I wouldn’t get lost and could reach sessions on time. Next, I headed to the Pre-Summit Marketplace Mixer along with fellow Outreachy interns to mix, mingle, check out the sponsor booths, and grab some drinks and snacks. This was the first time I met my coordinator, Victoria Martínez de la Cruz, and my mentor, Samuel de Medeiros Queiroz, in person, and my enthusiasm skyrocketed.

Diving deep into the summit schedule, I have to say there were many tempting talks, but it was neither physically possible to attend all of them nor a good idea to chase everything that glittered. To tackle this, I planned my schedule in advance and marked the sessions I couldn’t afford to miss in the OpenStack Foundation Summit app. Thankfully, the summit organizers promptly put the videos up on YouTube and the OpenStack website, which leaves only one stone unturned: what talks are super important to see in real time? While pinpointing the sessions, I considered both the speaker and the subject matter. Highly tactical sessions are generally useful to attend regardless of who leads them. However, sessions less directly related to one’s profession can be valuable as well if they’re led by an industry figure one is angling to meet.

On the first day, I jumped out of bed quite early to attend the Speed Mentoring session organized by the Women of OpenStack. It was a great icebreaker for getting to know experienced people in the OpenStack community. The session comprised both career and technical mentoring. The amazing experience encouraged me to join the Women of OpenStack session the following day, where we heard lightning talks and broke into small groups to network and discuss the key lightning talk messages. A big thanks to Emily Hugenbruch, Nithya Ruff and Jessica Murillo for their valuable insights.

While heading towards the keynote session I felt overwhelmed with joy to be a part of the OpenStack community. The keynote speeches were amazing, with cool demonstrations of technology, great customer stories, as well as insight from vendors about what makes an OpenStack deployment successful. I then proceeded to the Marketplace for some snacks and grabbed more swag. Many companies had a “We are hiring” tag on their hoardings, so I discussed the ins and outs of the roles with them and handed over my resume to stay in touch.

Many of the sessions I attended the following days were crowded. The big takeaways were increased interest in the Identity Service and practical knowledge gained in sessions like ‘Nailing Your Next OpenStack Job Interview‘ and ‘Effective Code Review‘. I also found the ‘Pushing Your QA Upstream‘ and ‘I Found a Security Bug: What Happens Next?‘ sessions quite useful. Attending the hands-on workshop Learn to Debug OpenStack Code – with the Python Debugger and PyCharm was an unmatched experience. To wrap up, every talk I attended had something to offer, be it an intriguing idea or a tactic that comes in handy.

During design sessions, we sat together at one table and focused on the most important aspects and plans for the next OpenStack release. The topics ranged from discussions about what can be done to make code more robust and stable, to how we can make operators’ lives easier. Things ran smoothly in these sessions as everybody was in one room, focusing on a single topic. Lastly, I must plug my own foray into the conference talk schedule. I had a blast talking about OpenStack Outreachy internships. The talk was intended for people who want to become better community members, along with those willing to start contributing or mentoring. I am deeply grateful to my co-presenters, Victoria and Samuel, whom I have known for nearly two years now.

Click to view slideshow.

But it’s not just the formal talks that are the best part of a conference. It’s also about meeting a new person who might be able to help you out with that sticky problem, catching up with old friends, or having the opportunity to just geek out for a little while over a nice meal and maybe a drink or two with the people who once helped you submit a patch. It’s a good idea to contact those you want to meet before the conference. This way you’ll set yourself apart by engaging that person in a casual meeting that isn’t distracted by throngs of people. Not to forget, almost all the sessions wind up by 6pm, so utilize this time to roam around the city. You may even consider extending your trip by a day or two to visit some famous tourist destinations nearby.

All work and no play makes Jack a dull boy!

Click to view slideshow.

Good luck to those who are applying for the Travel Support Program for Boston Summit. Still thinking? Hurry up, applications close on March 6, 2017.

Were you also there at the Barcelona Summit? I would love to hear about your experience in the comment section below. Stay tuned for more interesting updates and developments in the world of OpenStack.

Nisha Yadav | The Girl Next Door | 2017-03-04 14:10:47

import sys

count = {}

for line in sys.stdin:
  # strip out white space
  line = line.strip()
  # split words at ' ' and create a list of words
  words = line.split()
  for item in words:
    if item in count:
      # 'item' is used as a key in the 'count' dictionary,
      # so bump its tally if we've seen it before
      count[item] = count[item] + 1
    else:
      count[item] = 1

cat EO1-clean.txt | python wordcount.py

then, add this to get it in a pretty format:

# when you iterate over a dictionary,
# the for loop includes two variables,
# 'key' and 'val'
for key, val in count.iteritems():
  # if value appears more than once
  if val > 1:
    # print it
    print key + ": " + str(val)
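for comparison, the whole counting-and-printing pass above can be collapsed with collections.Counter (the sample lines here are made up, not from the executive order):

```python
from collections import Counter

def word_counts(lines):
    # split each line into words and count them all in one pass
    return Counter(word for line in lines for word in line.split())

counts = word_counts(["the order the order", "the end"])
# print only words that appear more than once, most frequent first
for word, n in counts.most_common():
    if n > 1:
        print(word + ": " + str(n))
```

same output shape as the loop above, just with the bookkeeping handled by the standard library.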


Jen Kagan | ITPPIT | 2017-03-02 18:21:09

We are in the last phase of Outreachy and, to be more specific, the last week of the internship. Time has flown by since my last post and I have no idea how it went by so fast. Anyhow, we have progressed a lot more since then.

After working on the mockups in Inkscape, we started working towards usability testing of the mockups we had done so far. While it is easy to keep iterating on the current designs, it is important to get feedback from real users. I was hoping to get more insight into the workflow and the elements that weren’t as obvious as we expected them to be. And it worked like a charm. While working on any project, we get so invested in it that we sometimes need a fresh, unbiased perspective to move forward.

I did the usability testing with a few people I know from my college and also a couple of team members from Cockpit. While I used paper prototypes to get feedback from the users in my college, we went with sharing images of the UI for remote testing with the users from the Cockpit team. We did run into a bit of an issue since screen-sharing the images of the UI would not work, and we ultimately had to resort to sharing images via GitHub. I wish the screen-sharing had worked since it might have made access easier.


Users were asked to perform the following tasks after I gave them pre-defined scenarios:

1. Allow new ports through the firewall.

2. Close ports from the current firewall rules.

3. Control the traffic log and infer data from it.

An example of a scenario was, “You are hosting an instant messaging app on your server, but you are unable to access it from the other computers in the network. It is probably due to the firewall settings. See if you can find out the port number and allow it through the firewall.”

User Feedback:

I gathered the user feedback and have listed it in order of frequency (from high to low):

  1. Need to see more filters (especially blocked traffic) in the traffic log.
  2. Include names of common services and ports to choose from instead of having to enter the port number.
  3. Confirmation dialogue before a port rule is removed.
  4. Add access rules for particular subnets.

Implementing the feedback:

From the above listed issues, number 3 is a very obvious one and something that should have been included in the earlier iterations too! While we are working on the mockup to add more filters to the traffic log and a “Suggestions” dropdown for the port number and services, adding access rules for particular subnets has been kept for further iterations of the UI.

The final task for me is to implement the UI from start to finish and perform usability testing with the working prototype to get more feedback.


(I know I always say this, but there is definitely a new post next week. Meanwhile, check Outreachy’s website to apply for their next round of internships.)


Bhakti Bhikne | Bhakti Bhikne | 2017-03-02 08:22:03

i used code from here to put all the words from the first executive order into a set, which is a way to get all the unique words in the document.

## from allison parrish's http://www.decontextualize.com/teaching/rwet/simple-models-of-text/

import sys

words = set()

for line in sys.stdin:
  line = line.strip()
  line_words = line.split()
  for word in line_words:
    words.add(word)

for word in words:
  print word

which produced “words” separated by line breaks. here are some interesting sections:

okay so obviously some of these would be neat poems, so i tried to join them:

import sys

for line in sys.stdin:
  line = line.strip()
  output = " ".join(line)
  print output

hmmmm noooo…
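(in hindsight, `" ".join(line)` joins the *characters* of each line string; collecting the words first and joining that list would do it. a sketch, not part of the original attempt:)

```python
def join_words(lines):
    # collect every word first, then join the list once
    words = []
    for line in lines:
        words.extend(line.split())
    return " ".join(words)

# with the script above you'd pass sys.stdin; tiny inline demo instead:
print(join_words(["all United", "burden"]))  # all United burden
```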

hmmmm nooooooo… okay, new activity: replacing the executive order with these poems.

i’m doing this manually for now since it would involve a bunch of regex, but i’ll record the steps here:

  1. replace all instances of “Minimizing the Economic Burden of the Patient Protection and Affordable Care Act Pending Repeal” with first poem above, “all United burden,” in the style in which the original text appears (so, with .title() or .upper())
  2. when sections begin, keep the text naming the section as such (“Section 1”, “Sec. 2”, etc.) but replace body of the section with the next poem above. remove newlines from poems above so the words flow like sentences, but don’t change case, punctuation, etc.
  3. fill sections for as many poems as were originally picked out from the set. delete sections that don’t have an accompanying poem.

this feels very related to a project i did in jer’s class last year where i replaced “mortgage” language with “data” language in hank paulson’s 2008 announcement about the economy. python woulda helped with that/made it better. anyway, executive order results here, original here.

another thing i was working on was figuring out how to clean up the file without going through it manually. these are things i did in the interpreter. i wonder if there’s a way to say if the space char appears 2 or more times, replace it with ' '? it’d also be cool to figure out how to split on html tags so i don’t have to manually delete those. maybe this will be useful later.

for line in lines:
  line = line.strip().replace('  ', ' ')
  line = line.strip().replace('   ', ' ')
  line = line.strip().replace('    ', ' ')
  line = line.strip().replace('     ', ' ')
  line = line.strip().replace('      ', ' ')
  line = line.strip().replace('       ', ' ')
  line = line.strip().replace('        ', ' ')
  line = line.strip().replace('&nbsp; ', ' ')
  print line
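the stack of `.replace()` calls above can be collapsed with the `re` module, which also handles the html-tag splitting. a sketch (the sample strings are made up):

```python
import re

def clean(line):
    # drop &nbsp; remnants, then collapse runs of 2+ spaces into one
    line = line.replace('&nbsp;', ' ')
    return re.sub(r' {2,}', ' ', line).strip()

def strip_tags(text):
    # split on anything shaped like an html tag, keep the text between
    return ' '.join(p for p in re.split(r'<[^>]+>', text) if p.strip())

print(clean('too   many      spaces&nbsp; here'))  # too many spaces here
print(strip_tags('<p>hello</p><br>world'))         # hello world
```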


Jen Kagan | ITPPIT | 2017-03-02 03:52:39

After generating the HTML representation of the questionnaire, the next target was to generate a PDF from that HTML. I evaluated several libraries for this, and Brian, one of my mentors, suggested using the WeasyPrint library for the task. I used Django's render_to_string function, which takes a Django template and data and renders the template's HTML as a string. This HTML string is then passed to the WeasyPrint API to generate the PDF. The following code segment generates the PDF and returns it to the user.

def post(self, request, *args, **kwargs):
    pdfform = self.get_object()
    project = self.get_project()
    questionnaire = None
    questions_list = None
    try:
        # lookup criteria reconstructed; the original call was truncated
        questionnaire = Questionnaire.objects.get(project=project)
        questions_list = Question.objects.filter(questionnaire=questionnaire)
        for question in questions_list:
            question_option_list = question.options.all()
            question.question_option_list = question_option_list
    except Questionnaire.DoesNotExist:
        pass

    html_string = render_to_string('questionnaires/pdf_form_generator.html',
                                   {'questionnaire': questionnaire,
                                    'questions_list': questions_list,
                                    'pdfform': pdfform})
    html = HTML(string=html_string, base_url=request.build_absolute_uri())
    pdf = html.write_pdf(stylesheets=[
        CSS(string='@page { size: A4; margin: 2cm };'
                   '* { float: none !important; };'
                   '@media print { nav { display: none; } }')])
    response = HttpResponse(pdf, content_type='application/pdf')
    response['Content-Disposition'] = 'attachment; filename=' + pdfform.name + '.pdf'
    return response

The sample PDF generated from the standard Cadasta questionnaire can be found here.

Kavindya Prashadi Bandara | Stories by Kavindya Peramune Rallage on Medium | 2017-02-28 12:27:40

After completing the views for the project, my next goal was to generate the HTML representation of the questionnaire, which is then used to generate the PDF.

Many PDF-generating libraries require HTML as input, so my target was to first generate the HTML representation of a Cadasta questionnaire. Cadasta's questionnaires are based on the XLSForm standard[1]. Cadasta imports an XLSForm-based questionnaire from the project details page and converts it to its own representation. Cadasta has three main elements which represent the XLSForm-based questionnaire.

Question — A Question is the main element of the questionnaire; a questionnaire is constructed from multiple questions. There is a set of predefined question types. Following are the available fields:

('IN', 'integer'),
('DE', 'decimal'),
('TX', 'text'),
('S1', 'select one'),
('SM', 'select all that apply'),
('NO', 'note'),
('GP', 'geopoint'),
('GT', 'geotrace'),
('GS', 'geoshape'),
('DA', 'date'),
('TI', 'time'),
('DT', 'dateTime'),
('CA', 'calculate'),
('AC', 'acknowledge'),
('PH', 'photo'),
('AU', 'audio'),
('VI', 'video'),
('BC', 'barcode'),

# Meta data
('ST', 'start'),
('EN', 'end'),
('TD', 'today'),
('DI', 'deviceid'),
('SI', 'subscriberid'),
('SS', 'simserial'),
('PN', 'phonenumber')

QuestionOption — A QuestionOption represents the options available for specific types of questions. The question types S1 and SM have a set of options to be selected, so these are represented as QuestionOptions.

QuestionGroup — Some questions are grouped together, so a QuestionGroup represents a question group.

As per the discussions, the ST, EN, TD, DI, GT and GS fields are omitted from the generated HTML. I used a separate HTML representation for each type of question, which is included by a root Django template to generate the complete HTML representation of the questionnaire.
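The per-type dispatch could be sketched like this. The template paths and the mapping below are hypothetical illustrations, not Cadasta's actual code; only the type codes and the omitted metadata fields come from the description above:

```python
# hypothetical per-question-type template lookup (paths are illustrative)
TYPE_TEMPLATES = {
    'TX': 'questionnaires/types/text.html',
    'S1': 'questionnaires/types/select_one.html',
    'SM': 'questionnaires/types/select_multiple.html',
}
DEFAULT_TEMPLATE = 'questionnaires/types/generic.html'
# metadata fields left out of the generated HTML, per the discussions
OMITTED_TYPES = {'ST', 'EN', 'TD', 'DI', 'GT', 'GS'}

def template_for(question_type):
    if question_type in OMITTED_TYPES:
        return None  # skip metadata fields entirely
    return TYPE_TEMPLATES.get(question_type, DEFAULT_TEMPLATE)

print(template_for('S1'))  # questionnaires/types/select_one.html
print(template_for('ST'))  # None
```

The root template would then include the returned fragment for each question in turn.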

Users can download a PDF of the questionnaire from the form details page, as shown in the screenshot below. My mentors Oliver and Brian helped me a lot in accomplishing this task.

Form Details Page


Kavindya Prashadi Bandara | Stories by Kavindya Peramune Rallage on Medium | 2017-02-28 12:11:36

Working with 9-Patch Images, Adapter Classes, and Layouts in Android.

Before starting this new task, I never wondered, “How does that bubble around our chat messages wrap around the width of the text we write?”

The images used as the background of our messages are called 9-patch images.

They stretch themselves according to the text length and font size!

Android will automatically resize the image to accommodate the contents, like this:


Source- developer.android.com

How great would it be if the clothes we wear could also work the same way, fitting according to body size. I could then still wear my cute, nostalgic childhood dresses...

Below are the 9-patch images I edited. There are two sets of bubble images, one each for incoming and outgoing SIP messages.

bubble_incoming-9           bubble1-9


These images have to be designed in a certain way: they should be stored at the smallest size, leaving 1px on all sides. The details are clearly explained in the Android documentation:


Then, save the image by inserting “.9” between the file name and extension. For example, if your image is named bubble.png, rename it to bubble.9.png.

It should then be stored like any other image file, in the res/drawable folder.

Using 9-patch images, these problems are taken care of:

  1. The image proportions are set according to different screen sizes automatically.
    You don’t have to create multiple PNGs at different pixel sizes for multiple screen sizes.
  2. The image resizes itself according to the text size set on the user’s phone.

I had to modify the existing Lumicall SIP message screen, which had a simple ListView as the chat message holder, and replace it with 9-patch bubble images to make it more interactive 🙂

Voila! What a simple way to provide a small yet valuable usability feature.


Urvika Gola | Urvika Gola | 2017-02-28 04:46:59

I worked on creating the whiptail and corresponding gpg scripts for four options for primary and/or secondary/subkey generation.

1) A “Quick” Generate Primary and Secondary Key task that only asks the user for the UID and password and creates an rsa4096 primary key, an rsa2048 secondary key and an rsa2048 laptop signing subkey.

2) A “Custom” Generate Primary and Secondary Key task that gives the user more flexibility in algo, usage and expiry, but still adheres to PGP best practices. For the primary key, the user chooses between rsa4096 key or an ECC curve, sign/cert or cert only for usage, and the expiry. For the secondary encryption key, the user also chooses between RSA and ECC, but can choose a key length between 2048 and 4096, and the expiry.

3) Generate Primary Key Only: the same as the primary key generation in #2.

4) Generate a Custom Subkey: The user gets to choose between rsa<2048-4096>, dsa<2048-3072>, elg<2048-4096>, and an ECC curve, and then chooses the usage and expiry. The tricky part was making sure that the usage matched the algorithm. For example, DSA is only capable of sign and auth, while RSA can do sign, auth, and encrypt. ECC curves are capable of all usages; however, for any given curve, encrypt cannot be combined with sign/auth in a single subkey, even though the name of the curve is the same. So I used radio buttons and checkboxes to make it as easy as possible for the user.
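As a rough illustration, option #1 could drive gpg in unattended batch mode with a parameter file like this (a sketch based on the description above; the name, email, passphrase, and expiry values are made up, and the real scripts collect them via whiptail):

```
# quick.params: rsa4096 primary (sign+cert) with an rsa2048 encryption subkey
Key-Type: RSA
Key-Length: 4096
Key-Usage: sign
Subkey-Type: RSA
Subkey-Length: 2048
Subkey-Usage: encrypt
Name-Real: Alice Example
Name-Email: alice@example.org
Expire-Date: 2y
Passphrase: hunter2
%commit
# then: gpg --batch --gen-key quick.params
```

The laptop signing subkey from option #1 would be added in a second step, since a batch parameter file generates one subkey.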

These options follow the best practices outlined at riseup and Debian Wiki pages, such as:

  • The primary key should use a strong algorithm and should only have the usages cert and/or sign.

  • Subkeys can be 2048-4096 bits, preferably RSA, DSA-2 or ECC.

  • The UID prompt shouldn’t ask for a comment

  • DSA-1024 is deprecated so I restricted DSA to a minimum of 2048.

Elizabeth Ferdman | Elizabeth Ferdman | 2017-02-28 00:00:00

Before testing the design and content additions to support newcomers, I wanted to document the rationale behind the initial suggestions after several “in-house” design iterations.

Outlining the reasoning of design decisions is a process that can be helpful in producing stronger designs. It forces you to thoroughly think through your design and limits personal opinions when collaborating with others or discussing changes with stakeholders.

Join the Team Section


Join the Team Heading

Assumption: The “Join the Team” heading is vague and is a missed opportunity to briefly explain the mission.

Experiment: Change the heading to “Help Us Improve the Web” to make the page more scannable.

Ways to Contribute

Assumption: Linking to the report form and open issues disorients new visitors who want to learn more about contributing.

Experiment: Add additional context by explaining the process and allowing the current links to serve as calls to action.

Learn More 

Assumption: The “Learn More” anchor text under the “Join the Team” section is vague and unexpectedly directs to the about page.
Experiment: Link to the contributor page, and make the anchor text easy to scan and understandable out of context by changing the sentence to “Learn more about how to contribute.”



Contributor Link

Assumption: Newcomers overlook the contributor page link that’s in the footer.

Experiment: Link to the contributor page in the top navigation to help newcomers quickly find what they need.

Contributor Page Content

“Newcomers often face unfamiliar and rugged landscapes when starting to contribute to an OSS project. Consequently, they need proper orientation to find their way into the project and to contribute correctly.” – Steinmacher, Igor et al

Contributor Roles

Assumption: Newcomers have a difficult time matching their skills and interests with project opportunities.
Experiment: Include specific roles on the contributor page to make all volunteer opportunities clear.

Success Stories

“…acknowledge and celebrate contributions, so that people who do contribute feel appreciated and motivated to continue;” – Chawner, Brenda
Assumption: Celebrating contributions provides social proof and shows the project appreciates contributors.
Experiment: Highlight success stories to add social proof, and leverage 3rd parties to build credibility.

Social Media Icons

“Distinct outlines shows the unique shape of each icon without any visual noise. This means users can see the icons immediately without getting interrupted by background borders.” – UX Movement
Assumption: The link to Twitter in the footer isn’t easily recognizable.

Experiment: Use easily scannable icons that have a distinct outline.

Team Photos

Assumption: Adding contributor photos makes it easier to see who’s behind the website and is more personable.
Experiment: Include images of team members on the about page to provide social proof, acknowledge contributors, and emphasize the networking opportunity.

Contact Page

Assumption: The current contact page could be more approachable and less distracting.
Experiment: Add clickable links to several communication channels and style the page to match the rest of the website. Like the contributor roles, the text is left-aligned to match the current style.


Chawner, Brenda. “Community Matters Most: Factors That Affect Participant Satisfaction With Free/Libre And Open Source Software Projects”. iConference ’12: Proceedings of the 2012 iConference (2012): 231-239. Print.


Steinmacher, Igor et al. “Social Barriers Faced By Newcomers Placing Their First Contribution In Open Source Software Projects”. Collaborative Software Development (2015): n. pag. Print.

Mesha Lockett | Outreachy UX Research and Design | 2017-02-27 16:35:44

From time to time I need to restore a backup of one database into another, or fill a new local development environment’s database with real data for testing. Since it isn’t a daily task, I don’t always remember how it’s done, so I decided to gather the step-by-step here, both to make it easier for me next time and perhaps to help someone else.

Before we can restore a backup, we need to make one! The command for that is pg_dump, which gives us a dump of the entire database (if you want to know more, click here to read the documentation).

pg_dump dbname > backupfile.sql

In the command above, we extract all the data from the database (replace dbname with the name of your database) and dump it into a .sql file. Done: we now have the backup saved. Tip: this is a very easy way to keep backups of your databases. I recommend creating a command that performs this backup and using cron to run it automatically on a schedule.
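For example, the cron tip could be a single crontab entry (a sketch; the schedule, database name, and backup path are assumptions for your own setup):

```
# run pg_dump every day at 03:00; \% escapes the % that cron would
# otherwise treat as a newline
0 3 * * * pg_dump dbname > /var/backups/dbname-$(date +\%F).sql
```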

To restore the backup into a database, Postgres also lets us do this in a very simple way, but there are a few details:

1) the database must exist, with the same name as the database that was dumped.
2) the database must not already have its tables created, otherwise there will be conflicts and the restore will fail.

Given that, if the database already exists, I first run dropdb db_name, which deletes the database (IMPORTANT: I only do this on local databases, because it erases all of the database’s data, so NEVER do this in production unless you know exactly what you are doing). After deleting it, we need to recreate it, otherwise the restore won’t work because it won’t find the database. To recreate it: createdb db_name

Finally, to restore the data, we feed the file into the new database:

psql dbname < infile

Done, we have a restored database! For me this process is essential at two moments: when for some reason (usually a tense one) I need to restore a database in production, or when I’m starting a new development environment and need data to test the app and the changes I’m making.

Ana Rute Mendes | Ana Rute | 2017-02-24 14:56:10

Home · anna-liao/pyslet Wiki · GitHub

The link above is a project wiki I wrote to describe my work and contributions. I also update the status regularly. My project is in good shape and has reached MVP for bi-directional GIFT format support. It can parse basic GIFT question types, read in a file, and generate GIFT-format text from Pyslet structures. It handles well-formed, valid inputs; the next step is to add validation to reject invalid inputs. The hope for this work is that other OSS contributors will find value in serialising the GIFT format to QTI XML structures and will develop the GIFT support further.

Anna Liao | Stories by Anna Liao on Medium | 2017-02-23 22:32:59

I promise. Got a nasty cold, so wasn’t making as much progress, but still here.

Brief catchup, since I’m in the middle of trying to get a bunch of wrap-up stuff done.

Usability tests with mockups

As I said a few posts ago, before I dove into CSS I needed to do some usability tests with my mockups. I was unable to get any of my original set of interviewees to do this, and due to sickness (both mine and Mo’s) and interfering weather, I was unable to do any in-person usability testing.

I did get 5 people using my prototype, with a good spread among the tasks I had available.

As mentioned previously, my tasks included what we identified as the most immediately relevant aspects of the project, and the mockups I made for those.

The first page of each of my prototypes and their associated tasks are shown below, with a link to the prototypes themselves in the short description below each mockup.

Prototypes and tasks

The People Prototype
  • You heard that there were going to be events in your local region (Southern California) in the next few months. Using this interface, find one of those upcoming events and show me how you would interact with the interface to find out when the event is, where it’s located, and who to contact about it, and tell me what you are thinking as you do it.
  • You recently attended an event, and are wondering if anyone has put anything interesting on the event page. Using the prototype, find a past event and visit the page, and tell me what you are thinking as you do it.
The Events Prototype
  • You are going to be traveling to Berlin, Germany on a business trip and have a couple of extra days on the tail end of your journey to explore. You wonder if there is a Fedora community of locals that you could meet up with during the trip. Use the prototype to find Fedora folks near Berlin, and tell me what you are thinking as you do it.
  • FLOCK Los Angeles is tomorrow, but you cannot find the address of the venue or directions on how to get there. You need to figure it out before tomorrow so that you can arrange for a ride there. Find a Fedora community member in the Los Angeles area who is online right now to help, and tell me what you are thinking as you do it.
Join us or Sign Up
  • You live near Boston, MA, USA, and someone sent you a link to the Greater Boston Hub. You’ve never used Fedora Hubs before. You want to join the group to keep up to date with what they are doing. Using this prototype, join the group and tell me what you are thinking as you do it.
  • Create a new account on Fedora Hubs using the prototype, and tell me what you are thinking as you do it.
Event Notifications
  • We have a few different notifications relating to regional hubs and events. These would appear in your stream of information, called “My Stream”. I would like you to take a look at these and tell me what you think of them. What do you think you can do here? What do you think they are for? Just look around and give a little narrative.
  • Now, please respond to the first event in the list, either ‘going’ or ‘maybe’. Talk to me about what you expect to be happening here and what you are doing.
  • Please return to the first page using the back button, and select the other option from the first event.

Initial reactions

After two usability sessions, it became pretty clear that any one individual should do only one, not both, of the two tasks in people, events, and join or create. Those tasks were much too similar, and doing both in a single session caused confusion.

Similarly, in the initial prototypes, the top-most bar was too realistic-looking, having been taken from a screenshot of a more visually designed page. As such, to better determine the source of confusion with multiple search bars on the same page, I replaced the search bar with one from Balsamiq.

Some small issues with Balsamiq came up. First, MyBalsamiq did not show what items were linked on the prototypes my users saw. If I looked at them myself, I saw the appropriate markings.

Second, I was unable to have an entire line be clickable, which added some unnecessary confusion. As far as I could tell, this is simply not supported.

I suspect strongly that this experience would have been greatly improved by a note-taker. It’s taking a lot of time to go through the sessions after the fact, identify and gather the relevant information, and come up with a good way to summarize what I found. I do also appreciate the experience and viewpoints of others when collecting and interpreting information.

Once I’m finished collecting together the information from the usability sessions, I will be discussing what I found with Mo, likely doing more affinity analysis, and creating some sort of summary of the results and of the entire experience. I’m not yet clear on what that all will involve, and sort of suspect it’s not likely to be complete by the 6th. Frustrating, but that does happen.

Closing Activities

My internship will be coming to a close on March 6th. I would like to leave things in as clear a state as I can, both to allow others to continue my work, and to make it easier for me to pick it back up when I’m no longer able to work full-time on it.

In addition to the collating, analysis, and summary of the usability testing, I will be finishing up a number of other things. This includes summarizing what events/event planning needs to include and what ambassadors tend to be doing as resources, and making sure all the raw data (transcripts and recordings) are available to Mo.

I’m still pushing people to take the survey, and it looks like some work that Mo and I recently did improved our numbers significantly (from 28 responses to 121!). I’m not sure that I’ll have time, but I’m hoping to do some analysis of that as well.

I’ve not really had much chance to really understand how one goes from prototype to visual design, which is unfortunate. That is one area that I definitely need more experience with! I may see about working more on that post-Outreachy.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-02-23 22:27:07

In this installment of the TaskCluster Bulletin, I’ll share a little talk I delivered today about GitHub integrations. (The TaskCluster team has these “TechTopics”, where people volunteer or are asked to talk about something interesting.) If you are completely new to the idea, this might be a video worth watching.


And here in this etherpad you’ll find notes for the talk and links to the slides and some resources.

Irene Storozhko | Stories by Irene on Medium | 2017-02-23 19:56:53

I just came back from Toronto, and it was a great week! I met a whole bunch of nice people, first of all my mentors, who turned out to be even more kind and encouraging than over the Internet. It’s wonderful that Mozilla arranges these in-person meet-ups for their interns, so helpful on many levels.

The main news about the last week, however, is that we’ve launched the integration! TaskClusterRobot is officially dead now. Also, I created a Quick-Start page where new users can easily create a configuration file for TaskCluster by filling out a small form (this page is particularly dear to me, because I think it’s the only part of my internship that I was able to do relatively on my own; actually, I think that page was even my idea, because that configuration file had scared the heck out of me personally when I just started with Outreachy).

Also, my mentor Brian fixed the first part of my project, so now the release events handling works perfectly. For the integration, my other mentor Dustin wrote lots of new unit tests. Actually, Brian and Dustin were the ones who got the most stuff done last week: together they closed 9 tickets and made huge updates to the documentation. I started working on a new API endpoint for TaskCluster-Github, which will add some functionality to the Quick-Start: users will be able to enter the name of their org and repo, and the page will tell them if the integration was installed for it. With Dustin’s and Brian’s help I set up a new Azure table and finished the API itself. I was even able to write a small test for it (and it works!).

I hope to devote the last three weeks of my internship to the badges project. I think if I’m able to sneak in emojis for build statuses, the project will be a definite success!

Irene Storozhko | Stories by Irene on Medium | 2017-02-23 19:55:22

Yesterday I got the integration working, hooray! I’m happy, because I seriously wanted the integration to work before Toronto… oh my goodness! I haven’t told you about Toronto yet! But I’ll start with the integration :)

So, the last time we left off on me having trouble with getting authentication process right. My mentors helped me with this, and I’ve learned a new trick of “delayed action”, as I call it. Let me demonstrate :)

So, previously TaskCluster was an oAuth app, and the GitHub authentication was done once at the very beginning, during loading:

github: {
  requires: ['cfg'],
  setup: ({cfg}) => {
    let github = new Github({promise: Promise});

    // Authenticate once, up front, with the token from the config
    if (cfg.github.credentials.token) {
      github.authenticate({type: 'token', token: cfg.github.credentials.token});
    }

    return github;
  },
},
// Usage later:

The returned github object was all ready-to-use. This is no good for integration, because you have to authenticate separately for each request with the credentials of the particular installation (instance of integration) that sent the request.

So, what we do is: we don’t authenticate at the beginning, we just prepare for it. What we return is a github object with an authentication function: at any point in the program you grab that object, pass the installation ID to it, and boom! You’ve made a custom authentication. This is what it looks like:

github: {
  requires: ['cfg'],
  setup: async ({cfg}) => {
    let github = new Github({promise: Promise});
    // Making preparations for authentication, namely reading the
    // integration ID and the PEM certificate.
    return {
      getInstallationGithub: async (inst_id) => {
        // Here we authenticate as the integration first.
        // Second, we use these credentials and the installation ID to
        // generate the authentication token for the installation.

        // ...and finally:
        let gh = new Github({promise: Promise});
        gh.authenticate({type: 'token', token});

        // The very end result is a fully ready-to-use github object
        return gh;
      },
    };
  },
},
// Usage later:
var iGithub = await this.github.getInstallationGithub(instID);

Groovy, huh? And after some debugging, I’ve got a working integration! Of course, I’ll have to do some manual testing, and then write some automated tests (or maybe just correct the existing ones), and then we will let users try it out, but generally speaking, this is the victory!

And another big news is that in one week, I’m going to Mozilla’s Toronto office to work with my mentors in person! The release engineering team will be gathering there too, so I hope to meet lots of people. Not to mention that I’ve never been to Toronto, and I’m dying to see it. This is going to be an insanely great trip, and I’m really excited! Thank you, Mozilla! ❤️

Irene Storozhko | Stories by Irene on Medium | 2017-02-23 19:54:29

Integrations are a relatively new GitHub feature that basically makes your app easy for other developers to use. All they have to do is click the “Install” button. This is our goal for TaskCluster-GitHub.

The first problem I have to solve is translating the authentication process from the oAuth flow to the installation flow. There are a few differences between oAuth apps and integrations, which means a lot of digging and testing.

First of all, installations authenticate in two stages: first as the integration, then as the installation, and the trick for me so far has been finding the proper places for these stages. Besides, while the integration ID (necessary for the first authentication step) is the same for all installations and can thus be put into an environment variable, the installation ID is a fluid thing that has to be obtained on the fly from the webhook.

Slowly but surely I’m getting there, however. As of now, the regression tests pass at least, and the installation ID is not hard-coded anymore. But one issue still bothers me: in one of our own API methods, we get the user email by making a request to GitHub. The very first try works, but all subsequent tries fail, because it skips the first authentication step.

My intuition is that this problem is far more complicated than it seems. We’ll see. I’ll be back when I’ve solved it!

Irene Storozhko | Stories by Irene on Medium | 2017-02-23 19:53:57

I already wrote a tiny note on ‘release’ event feature that I added to TaskCluster-GitHub. Here, I’m going to tell a bit more about it.

TaskCluster-GitHub is a CI service that listens to GitHub events and schedules tasks in TaskCluster according to the user settings in the .taskcluster.yml file. There were two such events that TaskCluster-GitHub handled before: push and pull request, meaning that every time somebody committed something to the repository or opened/closed/reopened a pull request, TaskCluster-GitHub would react and schedule the tasks needed (something like cloning the repository, installing the dependencies, compiling the app and running tests).

From now on, our users can also make use of the release event: TaskCluster-GitHub will trigger tasks when a new release or tag is created in the repository. Of course, a tag is little more than a named ref, so we could use the push event. However, a regular push doesn’t contain information specific to tags, and working with a dedicated release event is more convenient. Besides, some users may wish to trigger tasks for pull requests and releases only, or restrict certain tasks to happen exclusively for releases: that is possible now!
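In .taskcluster.yml that looks roughly like this (a hedged sketch of the version-0 schema; the worker and command values here are invented, so check the TaskCluster-GitHub docs for the real fields):

```yaml
version: 0
tasks:
  - provisionerId: aws-provisioner-v1   # invented values; use your own
    workerType: github-worker
    extra:
      github:
        events:
          - release      # run this task only when a release/tag is created
    payload:
      image: 'node:6'
      command: ['/bin/bash', '--login', '-c', 'npm install && npm test']
    metadata:
      name: release-build
      description: Tasks that run exclusively for releases
      owner: '{{ event.head.user.email }}'
      source: '{{ event.head.repo.url }}'
```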

The biggest challenge was debugging a typo. I’m glad that typo happened though, because it prompted me to dive deeper into the code and even make a couple of diagrams on paper. Now that I switched to other tasks, these diagrams come in extremely handy. In fact, I think it’s a good practice to make this sort of diagrams when you start reading somebody’s code, because human memory is far less reliable than computers’ :)

Irene Storozhko | Stories by Irene on Medium | 2017-02-23 19:53:02

Has it been three weeks already? Seems like I started my internship only yesterday! This week, I added a new capability to TaskCluster-Github: from now on, users can trigger tasks on the GitHub `release` event. It was a simple but very interesting task, which required me to dive even deeper into TaskCluster-GitHub’s workflow and module architecture.

I have to add that these past weeks I have been plagued by typos. I have come up with some strategies for minimizing typos in the future:

  • When adding similar code, it’s better to copy-paste.
  • If the editor has auto-suggestion, use it.
  • If the editor allows it, use a font with few look-alike symbols, or copy-paste the problematic code into a word processor and try the text in a couple of fonts.
  • Ask for help when you are REALLY stuck (gotta practice this one).

I’ll let you know if these measures have proved effective :)

Irene Storozhko | Stories by Irene on Medium | 2017-02-23 19:52:12

we’re talking about ~list comprehensions~ in class so i’m testing this demo code on the executive order i was playing with earlier.

import sys

for line in sys.stdin:
  line = line.strip()
  words = line.split()
  # keep only the words that are exactly five letters long
  fives = [w for w in words if len(w) == 5]
  print " ".join(fives)

$ python only-5-letter-words.py <EO1-clean.txt >test.txt
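the comprehension in that script is just shorthand for a filtering loop. a tiny sketch of the equivalence (the function names and sample sentence are mine, written python-3 style):

```python
def five_letter_words(line):
    """The comprehension: keep only the five-letter words."""
    return [w for w in line.split() if len(w) == 5]

def five_letter_words_loop(line):
    """The same filter written as a plain for-loop."""
    fives = []
    for w in line.split():
        if len(w) == 5:
            fives.append(w)
    return fives

sample = "protecting the nation from foreign terrorist entry"
assert five_letter_words(sample) == five_letter_words_loop(sample) == ["entry"]
```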

Jen Kagan | ITPPIT | 2017-02-23 19:28:10