In this blog post I shall discuss the architecture and protocols powering WebRTC. It is part of a blog series for my upcoming WebRTC workshop at Web Summer Camp, Croatia 2017.

While WebRTC has greatly simplified real-time communication on the web through the browser, its foundation comprises a whole collection of standards, protocols, and JavaScript APIs! Such is the power of WebRTC that with only a dozen lines of JavaScript code, any web application can enable peer-to-peer audio, video, and data sharing between browsers (peers).

The Architecture

WebRTC Architecture

WebRTC architecture consists of over a dozen different standards, covering both the application and browser APIs, maintained jointly by the WEBRTC (W3C) and RTCWEB (IETF) working groups. While its primary purpose is to enable real-time communication between browsers, it is also designed so that it can be integrated with existing communication systems: voice over IP (VoIP), various SIP clients, and even the public switched telephone network (PSTN), just to name a few.

WebRTC brings with it all the capabilities of the Web to the telecommunications world, a trillion dollar industry!

Voice and Video Engines

Enabling RTC requires that the browser be able to access the system hardware to capture both voice and video. Raw voice and video streams are not sufficient on their own. On the sending side, audio streams have to be –

  1. Processed for noise reduction and echo cancellation
  2. Automatically encoded with one of the optimized narrowband or wideband audio codecs
  3. Passed through a special error-concealment algorithm to hide the negative effects of network jitter and packet loss

Similarly, for outgoing video, the browser must –

  1. Process the raw stream to enhance quality
  2. Synchronize and adjust the stream to match the continuously fluctuating bandwidth and latency between the clients

On the receiving end, the browser must –

  1. Decode the received stream in real time
  2. Adjust the decoded stream to network jitter and latency delays
Voice and Video Engines

The fully featured audio and video engines of WebRTC take care of all the signal processing. While all of this processing is done directly by the browser, the web application receives the optimized media stream, which it can then forward to its peers using one of the JavaScript APIs!

VoiceEngine is a framework for the audio media chain, from sound card to the network.

VideoEngine is a framework for the video media chain, from camera to the network, and from network to the screen.

Audio Codecs

  1. iSAC: A wideband and super wideband audio codec for VoIP and streaming audio. iSAC uses 16 kHz or 32 kHz sampling frequency with an adaptive and variable bit rate of 12 to 52 kbps.
  2. iLBC: A narrowband speech codec for VoIP and streaming audio. iLBC uses 8 kHz sampling frequency with a bitrate of 15.2 kbps for 20ms frames and 13.33 kbps for 30ms frames.
  3. Opus: Supports constant and variable bitrate encoding from 6 kbit/s to 510 kbit/s. Opus supports frame sizes from 2.5 ms to 60 ms, and various sampling rates from 8 kHz (with 4 kHz bandwidth) to 48 kHz (with 20 kHz bandwidth, where the entire hearing range of the human auditory system can be reproduced).

Video Codecs (VP8)

  • The VP8 codec used for video encoding requires 100–2,000+ kbit/s of bandwidth, and the bitrate depends on the quality of the streams.
  • This is well suited for RTC as it is designed for low latency.
  • This is a video codec from the WebM project.

Real-Time Network Transports

Unlike all other browser communication, which uses the Transmission Control Protocol (TCP), WebRTC transports its data over the User Datagram Protocol (UDP).

The requirement for timeliness over reliability is the primary reason why the UDP protocol is a preferred transport for delivery of real-time data.

  • TCP delivers a reliable, ordered stream of data. If an intermediate packet is lost, then TCP buffers all the packets after it, waits for a retransmission, and then delivers the stream in order to the application. 
  • UDP offers no promises on reliability or order of the data, and delivers each packet to the application the moment it arrives. In effect, it is a thin wrapper around the best-effort delivery model offered by the IP layer of our network stacks.
WebRTC network protocol stack

UDP is the foundation for real-time communication in the browser. In order to meet all the requirements of WebRTC, the browser needs a large supporting cast of protocols and services above it to traverse the many layers of NATs and firewalls, negotiate the parameters for each stream, provide encryption of user data, implement congestion and flow control, and more!

The RTP Stack

  1. ICE: Interactive Connectivity Establishment
  2. STUN: Session Traversal Utilities for Network Address Translation (NAT)
  3. TURN: Traversal Using Relays around NAT
  4. SDP: Session Description Protocol
  5. DTLS: Datagram Transport Layer Security
  6. SCTP: Stream Control Transmission Protocol
  7. SRTP: Secure Real-Time Transport Protocol
  • ICE, STUN, and TURN are necessary to establish and maintain a peer-to-peer connection over UDP.
  • DTLS is used to secure all data transfers between peers; encryption is a mandatory feature of WebRTC.
  • SCTP and SRTP are the application protocols used to multiplex the different streams, provide congestion and flow control, and provide partially reliable delivery and other additional services on top of UDP.
  • Session Description Protocol (SDP) is a data format used to negotiate the parameters of the peer-to-peer connection. However, the SDP “offer” and “answer” are communicated out of band, which is why SDP is missing from the protocol diagram.



Princiya Marina Sequeira | P's Blog | 2017-08-19 14:19:34

This past week I finally got around to fixing the very first bug I ever filed against the AppDB.

It wasn't really a bug per se, but an enhancement request: add a field to the test report form for changes to default settings in winecfg (Wine's configuration tool). The bug had been gathering dust in Wine's bugzilla since 2008, but the reasons for wanting the enhancement are still valid.

The AppDB ratings system differentiates between applications that work as well as on Windows "out of the box" (platinum rating) and ones that can be made to work as well as on Windows with some effort (gold rating). False platinums submitted by users who did in fact use various tweaks to get the application working have been a chronic problem, as has the issue of users who give gold ratings but fail to specify the workarounds needed to achieve them.

I was already familiar with the test report form code from having grappled with the distributions issue, so adding another field to the form and database would not be a difficult task. The challenges here were in the design and wording of the interface: my goal was to reduce user errors, not create more.

The first issue was what to name the field. My bug report just mentioned changes to winecfg, but in reality the kinds of things that might need to be done to make an application work encompass more than that, such as changes to registry settings or installation of runtimes or codecs. In addition, many users make such changes through winetricks, a third party script, and don't think of them as changes to winecfg.

I needed a broader word that would encompass all possible fiddling, and in the end it came down to a choice between "tweaks" and "workarounds." I chose the latter because that's the word most commonly used in bugzilla.

The next issue was deciding what kind of field to add. I knew we needed a wysiwyg textarea that would allow users to say whatever they wanted/needed to say, but I also knew from similar fields already in the form that such a field is not useful for restricting ratings precisely because users can say whatever they want. If users could simply type in "None" in a Workarounds field, then the presence of content in that field couldn't reliably be used to restrict platinum ratings.

So I decided to add two fields to the Workarounds section. One is a simple yes/no question with radio buttons: "Were any workarounds used for problems that do not exist in Windows?"  The other is a wysiwyg textarea for users who answer "yes" to describe the workarounds they used.

With that in place, I added checks to the form processing to make sure the submitted rating is consistent with the answer to the Workarounds question. Users are now prevented from submitting platinum ratings if they report having used workarounds, and users who submit gold ratings without having described the workarounds used are now required to supply that information.
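The resulting checks boil down to something like this (the AppDB itself is written in PHP; this Python sketch, with invented names, merely illustrates the rules described above):

```python
def check_rating(rating, used_workarounds, workarounds_text):
    """Return an error message if the submitted rating is inconsistent
    with the answers to the Workarounds question, else None."""
    if rating == "Platinum" and used_workarounds:
        return "Platinum ratings cannot be given if workarounds were used"
    if rating == "Gold" and used_workarounds and not workarounds_text.strip():
        return "Gold ratings must describe the workarounds that were used"
    return None
```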

Could users willfully bypass those checks by, for example, putting workarounds information in another field and giving a platinum rating? Sure. However, I don't believe many (if any) of the incorrect ratings we've seen have been willful attempts to mislead, and if there are any, they will still be caught by the maintainer or admin processing the submitted form.

Rosanne DiMesio | Notes from an Internship | 2017-08-19 07:16:15

I attended COSCUP 2017, which was held in Taipei from August 5 to 6. COSCUP, which stands for Conference for Open Source Coders, Users and Promoters, is an annual conference organized by Taiwanese open source community participants since 2006. It's a major force of free software movement advocacy in Taiwan.

People of different careers, ages, areas, and organizations went there to talk about their experiences with FOSS. In particular, organizations such as WoFOSS and PyLadies in Taiwan talked about how to make communities more diverse and how to respect women in a community.

I also gave a speech at this conference about how college women can easily start participating in FOSS. I shared my experiences from Outreachy and GSoC and encouraged other women to join these programs. My speech attracted many college women, and I hope my story gave them some useful information and more confidence.


Mandy Wang | English WoCa, WoGoo | 2017-08-17 15:27:47

This week I improved the delay part of the feature and put some finishing touches on it.
I have explored the tests directory and understood how and what should be tested, and I am very eager to release this feature to Beta!

The UI part is practically done (it has not been merged yet, but I do believe it will be soon); the major components are well placed and robust enough, and I am just waiting for today's check-in with my mentors to see where we are heading with it 🙂

Ela Opper | FoxyBrown | 2017-08-17 11:14:11

Sandhya Babanrao Bankar | Kernel Stuff | 2017-08-17 02:47:09

Storyboards that contained only code were omitted from this post (#s 11, 12.5, & 20)

Angela Pagan | Coloring Book Intern Blog | 2017-08-16 20:23:21

Here is a gif from my latest A-Frame experiments for Lightbeam.


For MozFest 2017, I had submitted the following proposal – ‘Lightbeam, an immersive experience’. While the proposal is still being reviewed, I have been experimenting with A-Frame, and the above gif is an initial proof of concept 🙂

Here is the excerpt from the proposal:

What will happen in your session?

Lightbeam is a key tool for Mozilla to educate the public about privacy. Lightbeam's main goal is to use interactive visualisations to show web tracking, i.e., the first and third party sites you interact with on the Web.

In this session, the participants will get to interact with the trackers in the VR world thus creating an immersive Lightbeam experience. With animated transitions and well-crafted interfaces, this unique Lightbeam experience can make exploring trackers feel more like playing a game. This can be a great medium for engaging an audience who might not otherwise care about web privacy & security.

What is the goal or outcome of your session?

The ultimate goal of this session is for the audience to know and understand web tracking.

While web tracking isn’t 100% evil (cookies can help your favourite websites stay in business), its workings remain poorly understood. Your personal information is valuable, and it’s your right to know what data is being collected about you. The trouble comes when this data is handed to third parties to help them come up with new ways to convince you to spend money and give up more information. It would be fine if you decided to give up this information for a tangible benefit, but no one is including you in the decision.

Princiya Marina Sequeira | P's Blog | 2017-08-15 14:49:24

Green anole

“Adventure. Excitement. A Jedi craves not these things.” Yoda – The Empire Strikes Back

This is a summary of my reflections on what was learned this summer. Some things were learned explicitly but I think the items that will stay with me were learned implicitly (by observing how others worked).


  • The need to have more depth in my area of knowledge (currently JavaScript: with some emphasis on React/Redux) (KD)
  • When maintaining an app, it is a good idea to read the existing code a bit more thoroughly (understanding the code line by line) (KD)
  • The need to be better about refactoring code before requesting a review (MT)
  • The need for improved code error handling (MT)
  • The need for improved definition of use cases (being more thorough about thinking out typical input/output scenarios before coding) (MT)
  • The argument for making atomic commits (MT)


  • Relative to soft skills like making people happy or making people feel comfortable, coding is a relatively easy task
  • Also communication sometimes feels hard compared to coding

Carol Chung | CarroTech | 2017-08-15 06:04:13

The two memorable events of the past week were:

  • Getting my first patch for Firefox Nightly approved
  • Having a D’oh programming moment with my mentor about error handling

About the Nightly patch, I must admit that the experience feels so surreal that it really hasn’t registered emotionally. All I can recognize in my head is that the source code for Nightly is immense and the change I contributed feels molecular relative to an ocean. My mentor and a group member provided immense help in guiding me through the setup and review process.

Years ago, I worked as a Technical Writer for different enterprise business applications and had this sort of outsider’s feeling that the actual event of a software release is like a small miracle. That it is possible to produce a working piece of software that combines years of work with the ongoing changes of so many hands, well, it is really something amazing. Thinking about the Nightly releases reminds me of this.

About the D’oh moment, there was a point in my coding process when I had to troubleshoot an unexpected configuration change in the application. My mentor was trying to help me troubleshoot. The irc convo went something like:

>cch5ng: the error looks like *****. but when I manually update the configs it gets resolved. it still fails in the automated tests though.

>mentor: it looks like the code is blowing up at line ***.

>cch5ng: for some reason the automated test doesn’t pick up on the latest config changes.

>mentor: it looks like the code is blowing up at line ***.

>cch5ng: blah blah blah blah blah blah…

>mentor: it looks like the code is blowing up at line ***.

>cch5ng: ummm, do you mean I should be adding some error handling there for a missing config or unexpected data response?

Lessons learned:

  • Try to be better about defining the cases where error handling should be defined up front
  • Also separately: try to be better about defining the use cases; for ex. be thorough in the definition of different input states the code may need to handle
  • (Tip) When waiting for automated tests or build processes to complete, this could be a time to work on a blog…

Carol Chung | CarroTech | 2017-08-15 05:51:12

Yesterday I mentioned a discussion I was involved with on Facebook in which someone on the board of UXPA Boston suggested that I could organize a program for UX newbies and career changers.

I’m really pleased by this idea, and very glad she suggested it. However, before I bring my ideas to the board and get advice and help, I want to have slightly more clue than I currently have.

So, research!

The best way I can think of to get more clue is to talk to people in the UX space. I’d like to talk to other people who are new, people who do the hiring, and people who are working in UX with other UX team members.

UX Job Seekers

Based on my instincts and some of the suggestions on the FB discussion, I suspect people trying to get into UX full-time struggle with:

  1. Getting experience
  2. How to best structure their portfolio and resume
  3. Becoming known to companies

Some off-the-cuff ideas of ways to help with these:

  1. Internships, co-ops, programs like Outreachy/Google Summer of Code/Akamai's technical academy, mentorship, apprenticeship, small multi-person design projects, and UX hackathons
  2. Finding mentors, having get-togethers to review portfolios and resumes (among each other), and developing sustainable ways to get feedback from hiring managers
  3. Things that I listed in option #1, company visits, and informational interviews

UX Hiring Managers

I have a lot of interesting ideas above, but I would need to know more about what hiring managers are looking for to understand what would be most useful.

For example, in an ideal world, what do hiring managers want to see from candidates? What would be most useful to determine if they want to take a chance on someone? What do they want to see them do, have done, or be interested in doing? What do they _not_ want to see? What do they struggle with figuring out, but very much want in their employees?

People currently on UX teams

Of course, not only do I need to know what hiring managers look for, but I’d like to better understand what people look for in their co-workers.

Such as, what do UXers find most useful when working with other UXers? What do they especially dislike? How well do their hiring practices seem to tease these out? What do you most appreciate in your co-workers?

How can you help?

If you are in UX, or trying to get into UX, talk to me! Comment or email me!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-08-15 01:27:07

Updates to the error code E0611, which handles named and anonymous conflicts with respect to lifetime errors

In my last post on E0611, I talked about the purpose behind the error code and the coding process. This post will talk about how the error message has been modified from

error[E0611]: explicit lifetime required in the type of `x`
11 | fn foo<'a>(x: &i32, y: &'a i32) -> &'a i32 {
| ^ consider changing the type of `x` to `&'a i32`
12 | if x > y { x } else { y }
| - lifetime `'a` required

to something like this.

error[E0621]: explicit lifetime required in the type of `x`
11 | fn foo<'a>(x: &i32, y: &'a i32) -> &'a i32 {
| ---- consider changing the type of `x` to `&'a i32`
12 | if x > y { x } else { y }
| ^ lifetime `'a` required

How did this come about?

When E0623 was designed, the hir::Ty corresponding to the argument was highlighted using span_labels instead of the argument itself (which is what we are doing here).

error[E0623]: lifetime mismatch
--> $DIR/
11 | fn foo(x: &mut Vec<&u8>, y: &u8) {
| --- --- these references must have the same lifetime
12 | x.push(y);
| ^ data from `y` flows into `x` here

error: aborting due to 2 previous errors

Coding it

This was an easy task, as previous code was just being reused.

let span_label = if let Some(ty_anon) = self.find_anon_type(anon, &br) {
    ty_anon.span // what we are concerned with
} else {
    // ...
};

The end result is a more precise error message.

Gauri P Kholkar | Stories by GeekyTwoShoes11 on Medium | 2017-08-14 18:20:48

I’m back with more about Patternfly’s navigation bar user dropdown.

More from developer

I’ve done a brief, remote, contextual interview with the developer who originally asked for this to be researched. With this, I confirmed a few things about what his concerns are:

  • Accessing items within a dropdown takes more time and more clicks than without one
  • Dropdowns can be extra slow to interact with when on a slow network connection, especially when animations are involved
  • It’s easier to remember where to go to get to menu items that are top-level rather than under a dropdown
  • Frequent use items need to be easily accessed and easily discovered

Discussion with patternfly UX researcher

I’ve started a conversation with the UX researcher at Patternfly, Sara Chizari. In large part, I wanted additional perspectives on the problem. I was also hoping to learn if there is existing research on this topic that I’d missed.

My inclination is that the major goal of this research is two-fold:

  1. In the specific case of the developer I’m working with, what are the best guidelines for the use — or lack of use — of navigation bar dropdowns.
  2. In general, we need guidelines for the use of dropdowns.

I expect that these will also change with the display screen size: limited space constrains what can be at the top level.

How do we figure this out?

I’m not yet certain of the best way to go about figuring this out, which is part of what I’m discussing with Sara.

In this particular case, the dropdown is not expected to contain high-use items. While useful, I wouldn’t expect things like ‘settings’ and ‘log out’ to come up during the course of everyday use of an application or webpage. It’s difficult to be sure what other categories of items people are likely to want to use here, but the real-world examples I have are suggestive:

It looks like basically everyone includes settings and sign (or log) out. Many also include help. Of these, I would expect that sign out would be highest use, especially for those folks who access the applications on computers that are not their own.

Because these won’t be high use items, I’m not yet sure how best to create tasks for people to do during a usability session. I don’t think I want to overemphasize actions that they might not otherwise do, as it’ll make it somewhat difficult to identify the highest use items. At the same time, I need to have people try different prototypes of the menu and menu area to see how they turn out in practice.

What do I think so far?

My instinct suggests that we will specifically want to test the usability of a few different things:

  • Dropdown of 3 or fewer items vs not being in a dropdown.
  • Logout, settings, and/or help being inside or outside of a dropdown.
  • Mobile vs tablet vs computer monitor

These feel like they will address the ‘dropdown vs. not dropdown’ item-number cutoff point on various screen sizes, and the specific menu items that I believe to be the most frequently used.

I may want to identify the most used items in those dropdowns, before I go into more specific testing as per the above list. I’m not yet certain of the best way to approach this, however.

Now what?

Patternfly dropdown

Sara will be doing some literature research this coming week, and will then be busy until mid-Sept on her own projects. I’m hoping to figure out the kinds of things to be testing with the aim of starting usability sessions in September.

UX Newbies and career changers group?

In the meantime, due to conversations with the local UXPA group on Facebook, I’ve started investigating both problems and potential solutions facing UX newbies and career changers within the Boston area. The major goal here will be to figure out what types of things are interfering with getting new people into UX jobs, coming up with concrete things to do about them, and figuring out how to make those things available to people locally. I’d love additional perspectives and ideas, since I am only one of many folks trying to get into UX, and will definitely not have thought of all the obstacles (or possible solutions!).

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-08-13 19:51:14

In this post I shall discuss most of the math involved in the visualisations. A quick recap:

Lightbeam is now responsive and fully interactive.

Drag, zoom and responsiveness!


The math behind tooltips is the sequel to my blog post Lightbeam – Tooltips (SVG).

Read this blog post to understand Lightbeam’s migration from SVG to Canvas.


Ignore the transforms and the inversions (this.transform.invert) in this post. Those are part of d3-zoom and explaining the math of this and d3-force is beyond the scope of this blog post.

The mousemove event is registered on the canvas element itself.


The mouse <clientX, clientY> positions are re-calculated with respect to the canvas’s bounding rectangle. This ensures the mouse coordinates are confined to the canvas’s area.

getNodeAtCoordinates(x, y) returns a node, if a node is present at the given <x, y> values.

D3’s force layout has simulation.find(x, y[, radius]), which returns the node closest to the position <x, y> within the given search radius. I chose to write isPointInsideCircle() to find out if a node exists at the given <x, y> values. The intention here is to keep the logic as isolated from D3 specifics as possible.


When you hover over the canvas, and if the mouse coordinates are inside any circle, then there is a node present at these coordinates.


The point <x, y> is

  1. inside the circle if d < r
  2. on the circle if d = r
  3. outside the circle if d > r

Square roots are expensive. Hence, instead of computing d, the squared distance is compared with r*r!
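Written out as a minimal sketch (in Python for illustration; Lightbeam itself is JavaScript), the check behind isPointInsideCircle() could look like:

```python
def is_point_inside_circle(px, py, cx, cy, r):
    """Return True if point (px, py) lies inside (or on) the circle
    centred at (cx, cy) with radius r, comparing squared distances
    to avoid the expensive square root."""
    dx = px - cx
    dy = py - cy
    return dx * dx + dy * dy <= r * r
```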


Tooltip position:


The tooltip has position: absolute.

Because of this property, we need to check that the tooltip’s left property doesn’t exceed the canvas’s right edge; otherwise there will be a horizontal scrollbar on the parent container because of overflow-x.

If x + tooltipWidth >= canvasRight, the tooltip would overflow, so left is set to x - tooltipWidth.

Otherwise, setting left to x - tooltipWidth/2 ensures the tooltip arrow is centre-aligned to the node.
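That positioning rule can be sketched as follows (illustrative Python; the variable names mirror the post, not Lightbeam’s actual code):

```python
def tooltip_left(x, tooltip_width, canvas_right):
    """Compute the tooltip's `left` value for a node at x.
    Normally the tooltip is centred on the node; if it would
    overflow the canvas's right edge, it is flipped to the left."""
    if x + tooltip_width >= canvas_right:
        return x - tooltip_width   # avoid overflow-x on the parent
    return x - tooltip_width / 2   # centre the tooltip arrow on the node
```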


If a favicon exists for a given node, then it is drawn.


The favicon is drawn at the centre of the circle (firstParty) or triangle (thirdParty).


A square that fits exactly in a circle has a side length of sqrt(2) * radius.
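A quick sketch of that sizing rule (illustrative Python with names of my choosing; the actual drawing happens on the canvas):

```python
import math

def favicon_box(cx, cy, r):
    """Largest axis-aligned square inscribed in a circle of radius r
    centred at (cx, cy); returns (left, top, side) for drawing the favicon."""
    side = math.sqrt(2) * r
    return cx - side / 2, cy - side / 2, side
```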

firstParty & thirdParty nodes

Given that we are drawing on a canvas, firstParty is a circle on the canvas. thirdParty is an equilateral triangle.


Given that the centre of the circle is at <x, y>, r is the radius of the circumcircle and dr is the radius of the incircle.

equilateral triangle
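For an equilateral triangle, the incircle radius is half the circumcircle radius (dr = r/2), and the three vertices sit on the circumcircle at 120° intervals. This can be sketched as follows (illustrative Python; the function names are mine, not Lightbeam’s):

```python
import math

def equilateral_triangle(x, y, r):
    """Vertices of an equilateral triangle centred at (x, y) whose
    circumcircle has radius r; one vertex points up in canvas
    coordinates (where y increases downward)."""
    return [
        (x + r * math.cos(math.radians(a)), y + r * math.sin(math.radians(a)))
        for a in (-90, 30, 150)
    ]

def incircle_radius(r):
    """For an equilateral triangle, the incircle radius dr is r / 2."""
    return r / 2
```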

zoom and drag

d3-zoom and d3-drag are used to achieve the zoom and drag behaviours, respectively. Combining the two is quite complex: if you click and drag on the background, the view pans; if you click and drag on a circle, it moves.

d3-drag requires a dragSubject. I am using the same getNodeAtCoordinates(x, y) function that is used to show the tooltips, and the logic remains the same. This is how drag and zoom are combined for Lightbeam: if there is a node (a dragSubject), it drags; else it pans.

Here is the d3-zoom implementation.

[Screenshot: d3-zoom implementation]

The tricky part here is the need to distinguish between two coordinate spaces: the world coordinates used to position the nodes and links, and the pointer coordinates representing the mouse or touches. The drag behaviour doesn’t know the view is being transformed by the zoom behaviour, so we must convert between the two coordinate spaces.

This is where transform.invert or transform.apply come into play.

I hope I have done justice to the math in this post!

Princiya Marina Sequeira | P's Blog | 2017-08-13 18:50:23

Designing for the User

The first part of this project was to create a rough draft of the package, and a set of both unit tests and integration tests.

For the first several weeks, I only used the package locally from the project folder, which turned out to be much easier than writing a working, pip-installable package.

So that was the next goal to accomplish—but I also wanted to ensure to follow best practices and maintain stability.

These tasks presented many issues I had not yet considered, and many of them I did not (yet) know how to answer.

Writing a package that will be used by other people poses an important question.

“How will other people use this module, and how do I make it most useful to them?”

This week, I will be adding my tests to a Jenkins pipeline, so that we can see them in action.

I expect that this will help me answer some of the questions I’ve been struggling with.

Without this feedback, most of the decision-making about design and implementation is just guesswork.

Best Practices and Stability

I knew that I would be deploying this package to PyPI, so I designed it according to the structure of a pip package from the beginning.

This simplified the first stage of the process, as the most basic requirements were already fulfilled.

However, I have done quite a lot of rearranging and re-rearranging in the process of deciding what this package should look like.

There were also some changes that needed to be made so that axe-selenium-python would run within another project.

(These mostly concerned correctly referencing files, fixtures, and tests within the package.)

I did get some excellent code review from Tarek Ziade, a member of the Firefox Test Engineering Team.

Tarek has written multiple books on python, so I was a little intimidated when he offered to review my code.

However, I strive to produce the best code possible, so I always welcome constructive criticism.

He pointed out several things I had either missed or hadn’t considered.

I credit his feedback for helping me take this package from a rough draft state to an early-stage MVP.

One of the changes that I needed to make to improve the stability of the project was to remove the absolute path to the JavaScript file, and to make the file path OS-independent:

_DEFAULT_SCRIPT = os.path.join(os.path.dirname(__file__), 'src', 'axe.min.js')

This script grabs the JavaScript file in relation to the module currently being executed.

I did find that something interesting happens with this line of code, however.

If I run tests from within the project folder, it will look for the src directory within the tests directory.

If I run tests externally, within another project, it will look for the src directory within the top-level directory of the package, axe_selenium_python.

This code also uses os.path.join to create the OS-independent file path.

For example, if this package is run on Windows, the file path will use back slashes: axe_selenium_python\src\axe.min.js.

And forward slashes will be used for Unix-based operating systems.

Deploying to PyPI

I had some difficulty figuring out how to upload to the Test PyPI site.

It seems that they are in the process of migrating from to, which has created much frustration for other developers as well.

Much of the documentation I found was outdated, and I was receiving “server gone” errors when trying to upload to the test site:

Server response (410): Gone (This API has been deprecated and removed from legacy PyPI in favor of using the APIs available in the new implementation of PyPI (located at For more information about migrating your use of this API to, please see For more information about the sunsetting of this API, please see


I did finally get it working, however.

and there was much rejoicing - monty python

Here’s the wiki page that finally got me past this point, if you find yourself in the same position.

Run It!

As expected, installing the package from pip presented new problems that didn’t exist when running it locally, like the previous issue with the JavaScript file.

Another problem I encountered involved the use of pytest fixtures.

I have gone back and forth on whether to use the fixtures at all. As of now, there are some instances where I do, and some where I don’t.

The users are free to use either method.

Pytest Fixtures

What is a fixture?

If you’re interested, here is the technical description of a pytest fixture.

As I understand it, the fixtures make some tasks easier and less wordy in their implementation.

Members of the Test Engineering team have written and use many different fixtures, and for different purposes.

A simple example is the base_url fixture. This fixture pulls the base_url setting from a config file, such as tox.ini, and uses it for selenium-based tests.

This removes the need to either specify the URL every time the tests are run, or to hard-code it within your tests (which is generally recommended against).
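Conceptually, this amounts to reading one setting from an INI-style file. Here is a rough, self-contained sketch of the idea; the real pytest-base-url plugin hooks into pytest's own config machinery, and the section name and URL below are made up for illustration:

```python
import configparser

# Read a base_url setting from an INI-style config such as tox.ini.
config = configparser.ConfigParser()
config.read_string("""
[pytest]
base_url = https://example.org
""")
base_url = config["pytest"]["base_url"]
print(base_url)  # https://example.org
```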

A more complex example is the selenium fixture.

Instantiating a WebDriver instance requires a few lines of code:

from selenium import webdriver

driver = webdriver.Firefox()

This same task can be implemented simply by passing the selenium fixture as a parameter in your test function:

def test_python_home_page(selenium, base_url):
    selenium.get(base_url)
    assert "Python" in selenium.title

(This example assumes that base_url is set to the Python home page in your config file.)

This implementation also does not require closing the WebDriver instance at the conclusion of the test; pytest-selenium will do this for you when the test ends.


The fixture that I wrote simply creates an instance of the Axe class, using a selenium instance.

When running tests locally, I had my fixture within the conftest.py file.

If users do want to use the axe fixture, I didn’t want them to have to manually modify their conftest.py.

So, I wrote a very simple plugin, pytest-axe, to enable the use of this fixture.
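The pattern behind such a fixture is small enough to sketch. The `Axe` class below is a stand-in so the example is self-contained; the real one comes from axe_selenium_python and its constructor may differ:

```python
import pytest

class Axe:
    # Stand-in for axe_selenium_python's Axe class, so this
    # sketch runs without the real package installed.
    def __init__(self, driver):
        self.driver = driver

@pytest.fixture
def axe(selenium):
    # Wrap the WebDriver provided by the `selenium` fixture
    # (from pytest-selenium) in an Axe instance for use in tests.
    return Axe(selenium)
```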

Sometimes the fixtures make testing a little simpler, but there are some tasks that can’t be accomplished when using fixtures.


Another thing I have been struggling with is whether or not I should be using pytest-selenium at all.

I went back and forth with this a bit in the beginning. For the sake of time, I decided to proceed with pytest-selenium.

It really isn’t possible to know what users will want at this point, so instead of trying to produce something perfect from the beginning, my focus is to produce something usable.

Jenkins Testing

As I said, this week I have been focused on running my tests in a Jenkins environment.

This should help me to make more informed decisions on my implementation.

Currently, the test suite I have been working with is mozillians-tests.

This is a series of tests for the public Mozilla phonebook, a directory of Mozilla employees and contributors.

I am experimenting with using an all-in-one test & report vs. a set of individual rule tests.

While a single test would still provide helpful feedback, there are a couple of issues with this approach.

If a single accessibility rule is violated, the test is marked as a failure.

There is also no way to xfail individual rules.

xfail is a decorator to indicate an expected failure. This allows test suites to return an OK until the problem is fixed. Once the test starts passing again, a flag is raised to the test team, signalling that the test was expected to fail, but is now passing.
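With one test per rule, individual xfails become possible. A self-contained sketch of the idea, where `check_rule` is a hypothetical stand-in for running a single axe-core rule:

```python
import pytest

def check_rule(rule):
    # Hypothetical helper: pretend the color-contrast rule currently fails.
    return rule != "color-contrast"

@pytest.mark.xfail(reason="known color-contrast violations on this page")
def test_color_contrast():
    assert check_rule("color-contrast")

def test_image_alt():
    assert check_rule("image-alt")
```

Because each rule is its own test, marking one rule xfail keeps the suite green while the violation is being fixed, and pytest flags it (XPASS) once it starts passing again.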

So it definitely seems like individual rule tests are the way to go. This is a bit more difficult to implement.

Here are my tests for the accessibility rules.

I have been playing with different approaches to accomplish this goal.

Considering there are only a couple of weeks left of this internship, solving this problem is the highest priority at the moment.

None of these approaches are particularly pretty at the moment, but I’m confident that I’ll have a more usable and stable implementation by the end of this week.

The post Mozilla Internship: Writing a pip Installable Package appeared first on Kimberly the Geek.

Kimberly Pennington | Kimberly the Geek | 2017-08-13 16:46:01

This snippet of code made me think and then overthink 😧. Though when I did figure out what was going wrong, there was a smug smile on my face 👼. It’s difficult not to lose your head in such times but this time, I felt like a brave warrior 💪, contradictory to the fallen warrior-like feeling when I debug errors.

struct Foo {
    field: i32,
}

impl Foo {
    fn foo<'a>(&self, x: &Foo) -> &Foo {
        if true { x } else { self }
    }
}

I have already written code to incorporate structs in E0623 and the ui tests seemed to be working fine. But the below snippet failed.

  1. It did not give the expected E0623 error message.
  2. The logs showed that find_anon_type() was returning false (???). Much to my surprise, similar testcases worked for other similar ui tests 😕. That called for a *debug mode*.
pub fn try_report_anon_anon_conflict(&self, error: &RegionResolutionError<'tcx>) -> bool {
    let (ty1, ty2, scope_def_id_1, scope_def_id_2, bregion1, bregion2) = if
        .is_some() &&
        .is_some() {
        if let (Some(anon_reg1), Some(anon_reg2)) =
            self.is_suitable_anonymous_region_for_anon_anon(sub)) {
            let ((def_id1, br1), (def_id2, br2)) = (anon_reg1, anon_reg2);
            let found_arg1 = self.find_anon_type(sup, &br1);
            let found_arg2 = self.find_anon_type(sub, &br2);
            match (found_arg1, found_arg2) {
                (Some(anonarg_1), Some(anonarg_2)) => {
                    (anonarg_1, anonarg_2, def_id1, def_id2, br1, br2)
                }
                _ => {
                    return false;
                }
            }

I examined the code for find_anon_type() and well, I examined it after writing a few debug! statements here and there to figure out the flow.

if let hir_map::NodeItem(it) = self.tcx.hir.get(node_id) {
    if let hir::ItemFn(ref fndecl, _, _, _, _, _) = it.node {
        return fndecl
            .filter_map(|arg| {
                let mut nested_visitor = FindNestedTypeVisitor {
                    infcx: &self,
                    hir_map: &self.tcx.hir,
                    bound_region: *br,
                    found_type: None,

NodeItem() does not cover impl items and traits. My initial thought was that my examples are wrong. Examining this part of the code suddenly made the problem clear to me.

This is how I changed it.

else if let hir_map::NodeImplItem(it) = self.tcx.hir.get(node_id) {
    if let hir::ImplItemKind::Method(ref fndecl, _) = it.node {
        return fndecl
            .filter_map(|arg| {
                let mut nested_visitor = FindNestedTypeVisitor {
                    infcx: &self,
                    hir_map: &self.tcx.hir,
                    bound_region: *br,
                    found_type: None,

Similarly I added a NodeTraitItem too. Now the example works fine 😊. There’s a minor shortcoming though: the below example now generates E0621 even though the error message is disabled on traits and impls (yet to figure out what’s happening there).

trait Foo {
    fn foo<'a>(x: &i32, y: &'a i32) -> &'a i32;
}

impl Foo for () {
    fn foo<'a>(x: &i32, y: &'a i32) -> &'a i32 {
        if x > y { x } else { y }
    }
}

And well, the day continues 💻

EDIT: The other issue is fixed. Turned out I had changed the function to not return a None for that case.

Gauri P Kholkar | Stories by GeekyTwoShoes11 on Medium | 2017-08-13 12:23:36

Getting enlightened by the keynote session Jimmy Wales in conversation with Gabriella Coleman and attending a session about “How good communication can help you talk to reporters, rule the world, and do other cool stuff”.

Communication — one of the things I want to improve with the growing network and increasing opportunities to meet new people all around the world. Who better to guide you than the Wikimedia Communications team! The session was fun; we had an activity in which we had to describe Wikipedia in simple terms. There are all kinds of viewpoints you have to consider while introducing a new topic.

I have to run to attend other sessions and meet more like-minded people, so I’ll list a few points that struck me.

  • Placing your story out there.
  • For explaining Wikipedia, you could say it’s a free encyclopedia, but that won’t work because some people don’t know what an encyclopedia is. Words that are simple can be understood by all cultures.
  • Challenge yourself while describing these things depending on your audience.
  • Technical explanations to describe what you want to convey.
    For example, use cases — as Sage Ross encourages and that’s how I was able to get started with the project. Sage had listed use cases for the profile pages project which was beginner friendly.
  • For the open source community, which wants beginners to participate and contribute, having use cases for the tasks/issues that need to be resolved is a great way for a newcomer audience to get started and connect with the community.
  • While developing a project, you can borrow from some of the work done by existing projects, and don’t have to develop from scratch. So, if you succeed with your work, do share it. In this way the knowledge and software in this world get refined and improved. That’s, in a way, what Wikipedia is about.
  • Publish stories, humanize.
  • LET PEOPLE KNOW ABOUT YOUR WORK!! Once you forget and if the records don’t exist then what’s the fun in that!
  • Because you are busy growing and exploring more with your work and fun contributions, you should put your latest work out there today — DON’T WAIT. Your posts, pictures, and videos will be refined with time. (HAPPY SHARING)
  • For editors to be noticed — frame the work in a way that introduces it to people who are not yet aware of it.

Sejal Khatri | Stories by Sejal Khatri on Medium | 2017-08-12 01:01:01

I’ve met with Mark, the owner and developer of Querki, a couple of times now. I shall now do my best to summarize my understanding thereof, with the purpose of identifying any obvious gaps in my knowledge and any clarifications that may be needed.

Querki is intended to support and encourage collaboration about and sharing of information within communities. The longer blurb on the Querki help page is:

It is a way for you to keep track of your information, the way you want it, not the way some distant programmer or corporate executive thinks you should. You should be able to share that information with exactly who you want, and it should be easy to work together on it. You should be able to use it from your computer or your smartphone, having it all there when you need it.


Everyone has a lot of information to keep track of, much of which they would also like to be able to share and discuss with others. Querki offers a customizable interface in which to manage, display, discuss, share, and explore small to medium data sets with small to medium-sized groups.

An existing example of this is from the Cook’s Guild in the local SCA chapter: they have recipes from specific time periods, and they figured out reconstructions of those recipes so that they can be made nowadays.

As you can see in the screenshot above, the recipes are categorized by type of food, period of food, and culture. Clicking on any of those — also known as tags — will bring you to a list of relevant recipes.

Many of Querki’s useful abilities are currently only possible using the Querki programming language (QL, said as ‘cool’) — such as finding a recipe for 14th century French pancakes in the above cook’s guild space. In the future, the plan is to make common tasks easy to do without the use of QL.

Basic Usage


To view a Querki space, one only needs a link to said space. Precisely what a space will look like varies depending on the desires of the owner of that space.

One of the topics that Mark and I are currently discussing is the idea of a basic default structure for a space. This would hopefully mean that those who don’t want to spend a lot of time structuring their space will still have usable spaces for people to access, discuss, and interact with the data. For those who want to affect structure, that can be done when one has time and inclination, smoothly allowing the transition from a basic Querki space structure to whatever modifications are desired.


A Querki space is meant to be a place for information to be stored and shared. To do this, however, one needs to tell Querki how you want your information to be structured. A model is how you tell Querki the structure you would like for your information.

For example, a model for a CD might include properties for the album title, author, song lists, genre, and publication date, as well as an auto-generated name of the model. Properties can be added when the model is first created, as well as after the fact.

Properties have types. Types include the tag type, the text type, the photo type, and the views type, among others. Properties can themselves have properties, such as with tags. Tags are both the name of a thing and have the possibility of pointing to a related model. Tags may have a description, visible when the tag name is clicked, or simply show a list of things with that tag.

Views are ways to display models. The current default view shows a list of things with that model associated with it. There is also the possibility of a ‘print view’, which will tell Querki how to print the model.

Models will have instances of that model: rather than the generic properties that models contain, instances contain information specific to the instance of that model. In our CD model example, you might have the CD Zoostation by U2, as an instance of the CD model.

In addition to models and their associated pieces, Querki has pages. These are unstructured, and may be understood as a report from a database, or a wiki page.

What else?

Major goals of Querki

  • Allow for integration with existing social networks in order to help people connect with and invite people they know to work together on Querki.
  • Get Querki to the point of general availability (it’s currently in beta) and having people interested in paying for it. It’s not yet clear what this entails. More investigation required.

Different skill levels of users

Currently, there is an idea of an ‘easy’, ‘medium’, and ‘hard’ interface. These largely describe the degree to which one needs to be able to program to get the interface to do what you want.

  • The “easy” interface is meant to allow people to use a published template (aka ‘an app’) from an existing Querki space to structure their own, and to use someone else’s space.
  • The “medium” interface allows more customization, but doesn’t present the more complicated/confusing programming options to the user.
  • The “hard” interface is meant for hard-core programmers, allowing the use of every tool available in Querki. This allows the building of templates (apps) and lots of power user commands through the underlying programming language QL (pronounced “COOL”).

It is not currently very clear to users what their options are for using QL.


Search is very basic right now, with searches being within a Querki space, on plain text strings. The goal in the future is to include the ability to search across spaces as well as objects, including tags.

There are currently icons for editing a page, refreshing a page (with your changes?), and publishing a page (for those spaces which do not want changes to happen immediately during editing). Are these reasonable things to have as icons? Do they need text also/instead?

Mobile is very important! Consuming a page should be possible even on small phone screens. Editing should also be possible on a mobile phone. Designing a page on a phone isn’t likely, due to the small screen, but is planned for tablets.

Data manipulation/query building talks about the need to do some basic filtering and sorting of the information in an instance. We need to figure out the most common queries of this type, and see how many can be abstracted away from the underlying programming language for use by anyone/everyone.

Specific pages in need of (design?) work: front page, help, contextual help, model design page/advanced editor. The programming UI needs help (see the design page), and likely needs a simple IDE.


Querki spaces are mostly publicly visible, which should help come time to improve the login page/start page.


Tag names cannot currently be the same name as the model associated with them, due to the fact that tags point to a related model rather than containing it. This may need to be invisible to users to avoid confusion?

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-08-11 18:05:42

Meeting Sage (yay!) and lots of other amazing-smart-fun people, getting the feeling of being a Wikimedian and, most importantly, hacking on the Dashboard project!

Selfie with Sage!

Finally, I got to meet Sage Ross; in addition to being an amazing mentor, he is a great person to hang out with. Sage introduced me to many of his Wikimedian friends (lots of them). Meeting everyone in person feels great after working remotely for a long time now.

Rachel (left), me (middle), Srishti (right)

I got to meet Srishti Sethi, an Outreachy/GSoC administrator from India, and Rachel Farrand, one of the organizers of Wikimania ’17. We had a great interaction and plan on meeting again.

Camelia Boban from Italy!

Meeting Camelia Boban from Rome, Italy.

Camelia joined us to work on the Programs and Events Dashboard. Currently the stats displayed on the course home page represent all the edits made by the editors enrolled in that course on Wikipedia. Camelia wanted the Outreach Dashboard to display stats only for the articles relevant to that course. So, Sage and Camelia worked on adding a feature to the Dashboard which lets you track the statistics only for a particular course such that the scope of revisions is limited to the assigned articles in that course.

It’s up and running now; to use it, you have to change the course type to Article Scoped Program.

Difference between Visiting Scholarship and Article Scoped Program course types!

The Visiting Scholarship course type also has a similar feature for tracking stats based on a limited article scope, but there is a significant difference between the purpose and use case of the Visiting Scholarship course type and the Article Scoped Program type.

The Visiting Scholarship course type is for experienced Wikipedians who don’t want the Dashboard to make any edits on Wikipedia (these edits basically say that the user has edited Wikipedia as part of a Dashboard course), while the Article Scoped Program course type has a wider scope: it’s for events where instructors want to track the progress made by only that course.

More coming up! Lots more to share. Thanks for reading :)

Sejal Khatri | Stories by Sejal Khatri on Medium | 2017-08-11 14:37:05

The error code E0623 handles lifetime errors where both the regions are anonymous. The article talks about how a new error code E0624 was avoided when it came to handling structs, what I had initially started off with and what the end result was — a generalized message for anon_anon conflicts, emphasizing on code re-usability as well as simplicity when it comes to end users.

Let us extend E0623 further in the case of structs.

Let’s start with an example.

struct Ref<'a> {
    x: &'a u32,
}

fn foo(mut x: Vec<Ref>, y: Ref) {
    x.push(y);
}

And here is the final error message.

error[E0623]: lifetime mismatch
--> $DIR/
14 | fn foo(mut x: Vec<Ref>, y: Ref) {
| --- --- these two types are declared with different lifetimes...
15 | x.push(y);
| ^ ...but data from `y` flows into `x` here
error: aborting due to previous error

Design the error message.

The initial design was as follows and hence, we were planning to make a new error code corresponding to only *structs*.

Let’s recap what E0623 looks like.

11 | fn foo(x: &mut Vec<&u8>, y: &u8) {
| --- --- these two references are declared with different lifetimes...
12 | x.push(y);
| ^ ...but data from `y` flows into `x` here

error: aborting due to previous error

So here is how we started off — Design I

struct Ref<'a, 'b> { a: &'a u32, b: &'b u32 }
fn foo(mut x: Ref, y: &u32) {
              ---     - lifetime parameter must match
              2nd lifetime parameter on `Ref` must match
       x.b = y;

We wanted the error message to provide sufficient info to the user about the error and how to tackle it, but it dawned on us that our efforts to create a perfect error message might not be well received by end users, as it would look like *too much information*, and maybe E0623 --explain could serve that purpose. Considering that it might overwhelm users with too much information, and that it would require creating a new error code, we decided to do something like this — Design II

+error[E0624]: lifetime mismatch
+ --> $DIR/
+ |
+14 | fn foo(mut x: Vec<Ref>, y: Ref) {
+ | --- --- this struct and reference are declared with different lifetimes...
+15 | x.push(y);
+ | ^ ...but data from `y` flows into `x` here
+error: aborting due to previous error

This made sure that expanding E0623 to handle the case would suffice. The final aim was to make a generalized error message (neither too verbose nor too short), and so we decided to combine it with the existing error message of E0623 — Design III

+error[E0624]: lifetime mismatch
+ --> $DIR/
+ |
+14 | fn foo(mut x: Vec<Ref>, y: Ref) {
+ | --- --- these types are declared with different lifetimes...
+15 | x.push(y);
+ | ^ ...but data from `y` flows into `x` here
+error: aborting due to previous error

Coding it…

This step took me more time than usual, making me give a thought to time management; maybe because somewhere along the way, I got lost in the flow. But when it did work, there was this feeling of elation 💃.

Earlier, we were dealing only with references; in the compiler’s language, that was hir::TyRptr. Similarly, for structs, it is hir::TyPath(QPath). Now, whatever the QPath is, it doesn’t matter much to us.

// Corresponds to the NestedTypeVisitor we use to walk the
// anonymous types for e.g. `Vec<&u8>`
fn visit_ty(&mut self, arg: &'gcx hir::Ty) {
    match arg.node {
        hir::TyRptr(ref lifetime, _) => {...}
        // Checks if it is of type `hir::TyPath` which corresponds to a struct.
        hir::TyPath(_) => {...}

What do we do once we find the hir::TyPath(_)?

hir::TyPath(_) => {
    let mut subvisitor = &mut TyPathVisitor {
        infcx: self.infcx,
        found_it: false,
        bound_region: self.bound_region,
        hir_map: self.hir_map,
    };
    intravisit::walk_ty(subvisitor, arg); // call walk_ty; as visit_ty is empty,
                                          // this will visit only the outermost type
    if subvisitor.found_it {
        self.is_struct = true;
        self.found_type = Some(arg);
    }
}

Since `walk_ty` visited only the outermost type, I made a call to intravisit::walk_ty(self, arg), where self corresponds to FindNestedTypeVisitor, to recursively search the innermost types (a mistake I made in one of my earlier commits was to not include this, about which I will talk in a later post).

What is a TyPathVisitor?

// The visitor captures the corresponding `hir::Ty` of the anonymous
// region in the case of structs, i.e. `hir::TyPath`.
// This visitor is invoked for each lifetime corresponding to a struct,
// and walks types like Vec<Ref> in the above example, and Ref, looking
// for the HIR where that lifetime appears. This allows us to highlight
// the specific part of the type in the error message.
struct TyPathVisitor<'a, 'gcx: 'a + 'tcx, 'tcx: 'a> {
    infcx: &'a InferCtxt<'a, 'gcx, 'tcx>,
    hir_map: &'a hir::map::Map<'gcx>,
    found_it: bool,
    bound_region: ty::BoundRegion,
}

And a few minor changes to FindNestedTypeVisitor, i.e. adding a new field, is_struct.

// Indicates if the argument corresponds to a struct.
is_struct: bool,

A call to intravisit::walk_ty(subvisitor, arg) makes calls to various other default visitor functions, and somewhere in the sequence lies visit_lifetime(), which we override as follows.

let br_index = match self.bound_region {
    ty::BrAnon(index) => index,
    _ => return,
};

match self.infcx.tcx.named_region_map.defs.get(& {
    Some(&rl::Region::LateBoundAnon(debuijn_index, anon_index)) => {
        if debuijn_index.depth == 1 && anon_index == br_index {
            self.found_it = true;
        }
    }
    Some(&rl::Region::Static) |
    Some(&rl::Region::EarlyBound(_, _)) |
    Some(&rl::Region::LateBound(_, _)) |
    Some(&rl::Region::Free(_, _)) |
    None => {
        debug!("no arg found");
    }
}

The next few steps are the same as the ones to detect references.

if subvisitor.found_it {
    self.is_struct = true; // it is a struct
    self.found_type = Some(arg);
}

Now comes the part where we print the error message. I wrote a function process_anon_anon_conflict(), which decides what to print in the error message based on the struct and reference combinations — refer to Design II.

// try to pre-process the errors, for the purpose of generating different
// span labels for the different combinations of the two parameters, which
// can be either references or structs.
fn process_anon_anon_conflict(&self,
                              is_arg1_struct: bool,
                              is_arg2_struct: bool)
                              -> Option<String> {
    let arg1_label = {
        if is_arg1_struct && is_arg2_struct {
            format!("these structs")
        } else if is_arg1_struct && !is_arg2_struct {
            format!("the struct and reference")
        } else if !is_arg1_struct && is_arg2_struct {
            format!("the reference and the struct")
        } else {
            format!("these references")
        }

The decision was made to generalize the error message.

The is_struct field in FindNestedTypeVisitor was no longer necessary. The same was true of process_anon_anon_conflict(). The error-generating code now looked like this.

let span_label_var1 = if let Some(simple_name) = anon_arg1.pat.simple_name() {
format!(" from `{}`", simple_name)
} else {

let span_label_var2 = if let Some(simple_name) = anon_arg2.pat.simple_name() {
format!(" into `{}`", simple_name)
} else {

(span_label_var1, span_label_var2)
} else {
return false;

struct_span_err!(self.tcx.sess, span, E0623, "lifetime mismatch")
format!("these two types are declared with different lifetimes..."))
.span_label(ty2.span, format!(""))
.span_label(span, format!("...but data{} flows{} here", label1, label2))
return true;

Neat and clean ❤️. Maintainable code.

More on this in upcoming articles.

Gauri P Kholkar | Stories by GeekyTwoShoes11 on Medium | 2017-08-11 07:47:10

Akademy 2017


This year’s Akademy, held in Almería, Spain, was a great success.
We (the Neon team) have decided to move to using the snappy container format for KDE applications in KDE Neon.
This will begin in the dev/unstable builds while we sort out the kinks and heavily test it. We still have some roadblocks to overcome, but hope to work with the snappy team to resolve them.
We have also begun the transition of moving Plasma Mobile CI over to Neon CI. So between mobile (ARM), snap, and Debian packaging, we will be very busy!
I attended several BoFs that brought great new ideas for the KDE community.
I was able to chat with Kubuntu release manager Valorie Zimmerman and hope to work more closely with the Kubuntu and Debian teams to reduce duplicate work. I feel this is very important for all teams involved.

We had so many great talks, see some here:

Akademy is a perfect venue for KDE contributors to work face to face to tackle issues and create new ideas.
Please consider donating:

As usual, it was wonderful to see my KDE family again! See you all next year in Vienna!

Scarlett Clark | Home of Scarlett Gately Clark | 2017-08-10 19:57:24

WOW, it has been 13 weeks so far!
The thing with this program (for me, anyway) is that at the beginning I felt in over my head, and did not know what to do or where to start. Everything seemed scary and out of reach.
As with all new things for me, once I started to know and understand more and more, the scary part ended and the actual fun part began, because I genuinely love to develop new features and get to know new projects and the way they are built.

So – this week I have returned to the back-end of the feature, in order to implement the final part (and the very core of it):
the delay mechanism.

It was the part which had the most debate on it (among my mentors and other team members) because we wanted to make it as generic as possible.
It includes changes in the database and modifications to the very core of the extension components, and it is a very good thing that this task came up now, because now I *know* those components and feel very comfortable changing and even touching them.

I think (while waiting for the mentor’s code review) that I did it well, and it is supposed to be ready in a couple of days.


Ela Opper | FoxyBrown | 2017-08-10 05:57:32

I’m working on a toy project with a Northeastern student to get us both more experience with UX.

What is it?

The student I’m working with, Radhika Sundararaman, came up with the idea of a robot that would carry recycling bins from someone’s home to the dropoff point. This is most relevant for people who live in condo associations and other larger housing complexes with the dropoff for trash and recycling at some distance from their homes.


We expect the main users to be folks who are unable to walk, unsteady on their feet, or who do not have the strength to carry a bin. We imagine this to be relevant for older adults and differently-abled folks.

In addition, we think there is a market here for folks who are too busy or too forgetful to get their trash out regularly.

What are we looking into?

Our focus for this project is the user interface.



We have done quick interviews with a few different people, and came up with 5 personas.

These include an older adult couple, a single mom with two kids who works full-time, another mom with three kids whose husband is typically away on business, an office manager, and one adult of three whose household often forgets to bring out the recycling.

While exploring the personas, we identified a need for a website, a mobile app, and buttons on the robot itself.

Each persona includes important information about the context of each person’s trash situation. Some of the more relevant points include:

How do they find out about changes in trash pickup?

  • Poster in the room with the mail
  • Voicemail
  • Email
  • Paper schedule mailed out every year

How far are they from the pickup location?

  • 500 feet
  • Up a large hill
  • On the curb next to the stairs leaving the house

How much trash do they usually create?

  • Minimal
  • Lots — small child still using diapers

What sorts of difficulties do they run into?

  • Not home on trash dropoff day
  • Too busy
  • Forgetful

Tasks & Scenarios

During our task and scenario analysis session, we decided that one of the 5 isn’t a user we will focus on at this time, and another was covered pretty well by the first three users. We used sticky notes and a handy empty wall to organize our thoughts and discussion.

We started the process by writing down what users would do in the case where there were no errors. We made notes of where errors might occur, and things we might like to include as options in the future.

Who are we talking about? What are the main points to keep in mind? How might they want to interact? What are the goals?
The actions that users might need to do, problems they might run into, and ways we might handle those problems. Also a short list of things to keep in mind for the future.

We translated those into step-by-step descriptions of how a user would do their ideal actions with our software. Finally, we investigated those situations where things didn’t work out quite right for one reason or another, and explored how that might translate to our software.

What are the situations for ‘things working as expected’? What do we need to have for settings and configurations?

Once we had gotten to a point where we felt ready to start sketching, we also translated our sticky notes to a digital format for greater ease of access and reference. We do not have a centralized location in an office building to leave them, so this is the next best thing.

What are we not doing?

We will not be creating the robot itself, as this project is meant to be completed before Radhika returns to school in September. Neither of us has the technical expertise to focus on construction, and my experience with robots strongly suggests that this is not a simple problem to solve.

We are assuming a few things about the robot as part of our design process:

  • It will not be able to handle stairs
  • It has a weight limit on how much it can carry
  • It cannot pick things up

Next steps

Our next steps are to start sketching our ideas and discuss what we have each sketched. This will allow us to get onto the same page about our ideas, and come up with more effective and useful interfaces as a result of the exploration we will do.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-08-09 20:42:36

Clearing immigrations, currency exchange, jet lag, Canada here I am!!

Hello guys! A quick check-in from my side. I have been traveling for the last 36 hours, disconnected from the digital world and unaware of the mail updates. Finally, I reached Montreal to attend Wikimania ’17 (cheers!)
I am on my way to the Grey Nuns residence at Concordia University, the place I’ll be staying; I hope the check-in goes smoothly.

The event starts tomorrow; I’ll be participating in the Hackathon held on the 9th and 10th. I have signed up for various activities like volunteering and blogging, in addition to working on the Dashboard project.
The main conference starts on the 11th. I have gone through the programme schedule and have a few talks that I want to attend. For the other parts I am not yet sure what I’ll be doing; I’ll depend on the guidance of my mentors for that.

Well, I have been asked why I have to come so far just to attend a conference!? Mostly by my dad, who is accompanying me on this journey. For me it’s something I had planned, and not getting the scholarship was a bummer, but I didn’t want to quit on my plan of meeting Sage and Jonathan or on getting this experience!

Sejal Khatri | Stories by Sejal Khatri on Medium | 2017-08-09 18:28:04

What is Outreachy?

Outreachy is a wonderful initiative for women and people from groups underrepresented in free and open source software to get involved. If you are new to open source and searching for an internship that can boost your confidence in open source, Outreachy would be a great start for you.

Outreachy interns work on a project for an organization under the supervision of a mentor for 3 months. Various open source organizations (e.g., Mozilla, GNOME, Wikimedia, and the Linux kernel, to name a few) take part in the Outreachy program. It is similar to the Google Summer of Code program, but one difference is that participation isn’t limited to just students.

Another major difference is that it happens twice a year. There are both summer and winter rounds. So you don’t have to wait for the entire year but you can start contributing anytime and prepare for the next round which would be some months later.

My involvement with Outreachy

Before Outreachy, I had done web and android development projects in my college but I was new to the huge world of open source. My first encounter with open source was in November last year and at that time Outreachy was nowhere in my mind.

I heard about the program through a college senior who had earlier participated in Outreachy. I decided to participate in the coming round and started solving good-first-bugs.

The application period itself gave me a lot of confidence in my skills and work as a developer. I enjoyed it so much that I used to spend my whole day solving bugs here and there or just reading blogs about the program or the participating organizations.

Finally, there was the result day and I was selected for an internship at Mozilla for round 14.

I am currently working on Push Notifications for Signin Confirmation in Firefox Accounts. I am really enjoying my work. It is super exciting!!

Applying for Outreachy?

If you are planning to apply for the next round of Outreachy, here’s some advice that I can offer:

Start early

It is always better to know what is coming up. Try to explore as much as you can before the organizations and projects are announced. If you are a beginner, read about Outreachy, previously participated organizations, and start making contributions. You will learn a lot while contributing.

Choose your project/organization wisely

Once the organizations are announced, you will have about 50 projects (from different organizations, programming languages, and fields) to choose from. This is great because you can start contributing to the project you are most interested in.

Explore all the projects and choose one which interests you the most and you feel motivated to work on that project for the next 3–4 months.

Ask Questions

Do not hesitate to ask questions, even if you think a question is silly, because that one small question can block you for many days. First search for the solution yourself, but if it takes more than a day or two, just ask. Outreachy respects the fact that you might be a beginner, and everybody is going to respond to your query respectfully.

If it is an issue/project related doubt ask the mentors, otherwise for any Outreachy related query you can join the #outreachy channel on IRC.

Stay consistent

There can be days when you face block after block, but stay motivated and don’t stop trying. Don’t get disheartened if your patches are not accepted in the early stages. Eventually they will be; they just need a little more polishing. Keep going and one day you will get your PR merged!! :)

Be respectful

Always be respectful to your mentors and co-participants while communicating. If you see that a fellow participant is stuck on a similar doubt and you feel that you can help, share your knowledge even if they are your competitor. Getting selected is a goal, but spreading knowledge and involving more people in open source is the bigger aim of Outreachy.

Know your project before submitting an application

You do not have to hurry to submit a proposal. Get to know your project, set up the platform, and solve bugs; once you are comfortable with the code and platform, submit the application. This way you will have a better idea about the project, and it will show in your application.

Don’t get disheartened and learn from the past mistakes

If you do not get selected for one round of Outreachy, don’t be upset. Keep in mind that the next round is just a few months away, and your chances of getting selected in the next round will double if you keep contributing.

If you have any other query regarding Outreachy feel free to drop me an email at
Happy coding!!

Princi Vershwal | Stories by Princi Vershwal on Medium | 2017-08-08 19:47:15



This post is part of a series intended to provide those new to Bodhi with an introduction to some Bodhi basics. This post hopes to both introduce the life cycle of an update in Bodhi, as well as to outline the proposed addition of a ‘batched’ request intended to reduce update churn. If you’re a seasoned Bodhi user, you can jump to the Proposed revisions to an update’s life cycle section.


An update’s life cycle

First, a user creates a new update or edits an existing update. Updates have plenty of different attributes, but for the purpose of this brief introduction, I will only mention that updates have a type, severity, status, and request.

The update’s type (i.e., bugfix, security, newpackage, enhancement) classifies the update in relative terms of how it will help the end-user. The update’s severity (i.e., unspecified, urgent, high, medium, low) ranks the importance of getting the update to the end-user. Throughout their life cycle, updates flow through a series of requests and statuses. An update’s status is the current state of the update (i.e., pending, testing, stable, unpushed, obsolete, or processing). An update can request to enter a status (i.e., testing, obsolete, unpush, revoke, stable). If the update meets the requirements for the requested status, it enters that status upon the next mash run.


The update begins in the pending state and remains in pending until it requests to be pushed to the testing or stable repository. Once the update is testing, the update will be tested by other users and receive feedback in the form of comments and karma (i.e., -1, 0, +1). The update is eligible to be pushed to the stable repository when it meets the testing requirements (i.e., the update either meets the mandatory number of days in the testing state or reaches the minimum number of positive karma). Note: Additionally, the update can request to be “unpushed”, “revoked”, or “obsolete” from any state, but we will not cover these states here. 
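The eligibility rule above (mandatory days in testing OR minimum positive karma) can be sketched as follows. This is a hypothetical illustration, not Bodhi's actual code; the thresholds are configurable per release, and the names here are mine:

```python
def eligible_for_stable(days_in_testing: int, karma: int,
                        mandatory_days: int = 7, min_karma: int = 3) -> bool:
    # An update may request stable once EITHER threshold is met:
    # it has spent the mandatory number of days in testing, or it
    # has accumulated the minimum amount of positive karma.
    return days_in_testing >= mandatory_days or karma >= min_karma
```

So an update with 8 days in testing and no karma qualifies, as does a 2-day-old update with +3 karma, while a 2-day-old update with +1 karma does not.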

Currently, after the update meets the testing requirements, the next logical step is requesting to enter the stable repository. With autokarma enabled, the update automatically reaches the stable repository upon meeting the testing requirements. With autokarma disabled, the user will be presented with a UI option to push the update to stable upon meeting the testing requirements.



The current update life cycle


Proposed revisions to an update’s life cycle

In response to requests to reduce update churn, we will add an additional path to the update’s life cycle. This path will essentially gather a batch of updates (namely, all of the updates that have been approved to be pushed to stable) and put this batch in a queue. This batch will request to be pushed to stable once a week. This should result in a smoother experience for end-users, as they will only receive a new batch of updates once a week, as opposed to once a day. This should slightly reduce the mash time six days of the week.



The new update life cycle

Instead of immediately requesting to be pushed to stable after meeting the update requirements, the update will instead request to be batched. If autokarma is enabled, the update will immediately be moved to a batched request upon meeting the update requirements. If autokarma is disabled, the user will be presented with a UI option to move the update to a batched request upon meeting the update requirements.

Once the update is sitting as a batched request, it will proceed as before (with the exception of waiting in a queue for up to seven days). If the packager believes the update would benefit from an immediate request to stable, (s)he will now have the option to immediately request stable (thus bypassing the batched request) via a UI option. Otherwise, once a week, a cron job will execute a script setting the request of all updates currently requesting batched to request stable.

If the update’s severity is urgent or if the update’s type is newpackage, the update will bypass the batched request and immediately request to be pushed to stable. Urgent updates need to reach end users as soon as possible, so holding them in a queue wouldn’t be beneficial. Additionally, newpackage updates already tend to have a long process time, and we do not want to further slow their process time.
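The routing logic described above can be sketched roughly like this. It is a hypothetical illustration of the decision, not the actual Bodhi code; updates are plain dicts here for simplicity:

```python
def next_request(update: dict) -> str:
    # Urgent updates and brand-new packages skip the weekly batch
    # and request stable immediately; everything else waits in the
    # batched queue for the weekly push.
    if update["severity"] == "urgent" or update["type"] == "newpackage":
        return "stable"
    return "batched"


def weekly_batch_push(updates: list) -> None:
    # The weekly cron job: flip every batched request to stable.
    for update in updates:
        if update.get("request") == "batched":
            update["request"] = "stable"
```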


The current PR can be found here. Feel free to leave any feedback on the PR or as a response to the email with the subject “Two more concrete ideas for what a once-yearly+update schedule would look like” on the Fedora dev mailing list.



Caleigh Runge-Hottman | Caleigh Runge-Hottman | 2017-08-07 21:05:44

As I mentioned in my last blog post, I am back with exciting news. I am excited to share with you all that my team got accepted into the Rails Girls Summer of Code program (also known as RGSoC); it’s been one month now. We as a team have written a separate blog post for RGSoC. I am sharing the link here; you can read about our journey of getting started and making it into a sponsored team there :)

RGSoC Blog – Team Serv0101

Meet my team: me, Rakhi (top left); mentor Josh Matthews and me (top middle); teammate Neha Yadav (right corner); coach Rahul Sharma (bottom middle); coach Ravi Shankar (bottom left)

There are few people, I want to say thank you to:

Brian Birtles: Brian works on web animations and Firefox at Mozilla in Tokyo. He was the first person who told me about the Servo project and the Rust programming language. He suggested contributing to the project, as it is something amazing going on at Mozilla.

I was lucky enough to get a chance to meet Brian in person at the Mozilla All Hands in San Francisco. He is one of the nicest people I have ever met, and of course an awesome developer.

I will add more names here soon 🙂

Rakhi Sharma | aka_atbrakhi | 2017-08-06 21:27:28

The Past

The need to connect virtually and have video conferences and communications on the web has been around for a while. In the past, Flash was one of the popular ways to achieve this. The alternative was plug-ins or an installable application on the PC. From a user’s perspective, all these methods required additional installations. From a developer’s perspective, they had to study complex stacks and protocols.

The birth of WebRTC

WebRTC technology was first developed by Global IP Solutions (or GIPS), a company founded around 1999 in Sweden. In 2011 GIPS was acquired by Google and the W3C started to work on a standard for WebRTC. Since then Google and other major players in the web-browser market, such as Mozilla and Opera, have been showing great support for WebRTC.

Screen Shot 2017-08-06 at 12.12.30

The newly formed Chrome WebRTC team focused on open sourcing all the low-level RTC components such as codecs and echo cancellation techniques. The team added an additional layer – a JavaScript API – as an integration layer for web browsers. Combining these audio and video components with a JS interface spurred innovation in the RTC market.

A few lines of JS code and no licensing, integration of components or deep knowledge of RTC!

WebRTC – A Standard

WebRTC is a standard for real-time, plugin-free video, audio and data communication maintained by –

  • IETF – defines the formats and protocols used to communicate between browsers
  • W3C – defines the APIs that a Web application can use to control this communication

WebRTC is a standard that has different implementations

WebRTC is a standard that has different implementations, such as OpenWebRTC and webrtc.org. The initial version of the OpenWebRTC implementation was developed internally at Ericsson Research; the latter is maintained by the Google Chrome team.

Cover image for this post is from here.

Princiya Marina Sequeira | P's Blog | 2017-08-06 13:37:13

For the past couple of weeks I've been focusing on changes that mainly benefit AppDB admins, specifically the management of maintainers.

One of the first problems I fixed when I began this internship was with the interface for managing maintainers. In addition to fixing the sort order to something that made sense to humans (the original order was by userId, and appeared random), I also added a column to display the last time the maintainer had logged into the AppDB. I suspected that we had many maintainers who hadn't logged in in a long time, and the actual number proved even worse than I had feared: 57% of the maintainerships belonged to a user who hadn't logged in in over two years. (Note the number is for maintainerships, not users, as one user may have multiple maintainerships.)

How did this happen? The function that notifies maintainers of new test reports in their queue also sends two reminders, and removes the maintainership if they fail to process the queued reports after 7 days. That worked well for widely-used programs, but the AppDB has many entries for less-popular programs. Those were the maintainers falling through the cracks: since they weren't getting submitted test reports, and nothing was checking to see if they were submitting their own (which they are supposed to do), they were never removed.

About a month ago I removed the inactive user check, and in the process of studying that code, discovered that it was supposed to also remove maintainerships from users who had not logged in in six months. It never worked, because the check was only run against users who did not have data associated with their accounts, and being a maintainer was defined as having data. So the check was looking for maintainers in an array that had explicitly excluded them.

With all that in mind, I added a control to identify and delete maintainers who had not logged in in over 24 months to the admin control center. I eventually plan to add it to the daily cleanup cron script. The time period, IMO, is still overly generous; I selected it in part because of the large backlog of maintainerships that were going to be removed the first time it was run (3439 out of 6242), and in part because no time limit had ever been enforced before and I wanted to make it exceedingly difficult for anyone to argue that the definition of "inactive" was unreasonable. I may eventually shorten the time period to 18 or even 12 months.
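The cleanup rule amounts to a simple cutoff check. Here is a sketch in Python (the AppDB itself is written in PHP, and all names here are mine, with a month approximated as 30 days):

```python
from datetime import datetime, timedelta

def stale_maintainerships(maintainerships, months=24, now=None):
    """Return the maintainerships whose user hasn't logged in within `months`.

    `maintainerships` is a list of dicts with a 'last_login' datetime.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=months * 30)
    return [m for m in maintainerships if m["last_login"] < cutoff]
```

A daily cron script could feed this the full maintainership list and delete whatever comes back.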

Rosanne DiMesio | Notes from an Internship | 2017-08-06 08:06:18

At the end of this month, I am attending the Web Summer Camp at Rovinj, Croatia and I would be running a half-day workshop on 01.09.2017 about Web of Things – Peer to Peer Web.

Here is a little abstract about the workshop:

The web today is a growing universe. Over the years, web technologies have evolved to give web developers the ability to create new generations of useful web experiences. One such feature is WebRTC, which provides browsers and mobile applications with Real Time Communication (RTC) capabilities via simple JavaScript APIs. In this hands-on workshop you will learn to build applications to support real time communication on the web. You will build an app to get video and take snapshots with your webcam and share them peer-to-peer via WebRTC. Along the way, you’ll learn how to use the core WebRTC APIs and set up a messaging server using Node.

The focus of this workshop is hands-on coding exercises to build simple and fun WebRTC applications. WebRTC is a huge topic, and its technicalities plus hands-on coding cannot be entirely covered in a 3-hour session. This blog post series is meant to help participants learn a bit more about WebRTC.

In this post I shall discuss the title: Web of Things – Peer to Peer Web.

The Web of Things (WoT) is a term used to describe approaches, software architectural styles and programming patterns that allow real-world objects to be part of the World Wide Web. The Web of Things reuses existing and well-known web standards used in the programmable web (e.g., REST, HTTP, JSON), semantic web (e.g., JSON-LD, Microdata, etc.), the real-time web (e.g., Websockets) and the social web (e.g., oauth or social networks).

Peer to Peer Web is in the context of WebRTC which enables peer-to-peer audio, video, and data sharing between browsers (peers). Instead of relying on third-party plug-ins or proprietary software, WebRTC turns real-time communication into a standard feature that any web application can leverage via a simple JavaScript API.

WebRTC is P2P?

This is the traditional definition of the term peer to peer in the context of networks: Each computer acts as both the client and the server, communicating directly with the other computers.

A peer to peer network is often compared with a client server network and this is the obvious difference between the two: A client-server network involves multiple clients, or workstations, connecting to at least one central server. Most data and applications are installed on the server.

WebRTC enables peer to peer communication. But, WebRTC still needs servers!

  • For clients to exchange metadata to coordinate communication. This is called Signaling.
  • To cope with network address translators (NATs) and firewalls.
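For orientation, the browser side of a call setup looks roughly like this. This is a sketch, not workshop code: `signaling` stands in for whatever channel (e.g. a WebSocket to your own server) carries the metadata mentioned above, and the STUN server address is just a commonly used public one:

```javascript
const pc = new RTCPeerConnection({
  // A STUN server helps each peer discover its public address behind a NAT.
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

// ICE candidates (possible network paths) go to the remote peer via signaling.
pc.onicecandidate = ({ candidate }) => {
  if (candidate) signaling.send(JSON.stringify({ candidate }));
};

async function call() {
  // Capture camera and microphone, and attach the tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  // The offer (session description) is the other half of the exchanged metadata.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ sdp: pc.localDescription }));
}
```

The remote peer answers with its own session description, and once candidates are exchanged the media flows peer to peer.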

WebRTC is not only a standard specification with a default implementation in browsers; it is also an open source media engine.


Princiya Marina Sequeira | P's Blog | 2017-08-05 20:50:09

Yayyy! For the first time we had a meeting in OpenLabs and a presentation dedicated to Debian.
In our hackerspace we have various GNU/Linux communities that are quite active, such as Fedora, Red Hat, and openSUSE, and we were only missing Debian.

As soon as we got to know that Daniel Pocock (a Debian contributor and a big supporter of OpenLabs) wanted to organize an “Intro to Debian” presentation, we immediately proceeded with creating the event and promoting it.
Even though it was summer and people usually go to the beach, a considerable number of people were highly interested in getting to know how things really work in Debian and how its community is built. We were so happy to have someone who was willing to share his experience in this community with us.
I was fascinated by the way the community is built: the inner structure is very horizontal and, in my opinion, quite unique in its own way.

We also had the chance to hear that people can contribute in a wide variety of fields: those from a technical background can contribute by coding, while those from the social sciences can very well start building communities or do any other activity they feel they do best. This meetup was very constructive, and we are sure it won’t be the last one; other Debian presentations from new contributors will soon start in our hackerspace.

Stay tuned!



Kristi Progri | Kristi Progri | 2017-08-05 00:00:46

Hey people, hope you guys are doing awesome. It’s been so long since I have written a post. A lot has happened. I graduated two months back (June) and got a chance to visit San Francisco, USA. It was my first trip abroad; I gained a lot of new experience, met new people, and met some awesome developers. I have a lot to say about my USA visit, and of course about the Mozilla All Hands. I will make sure to write a separate blog post for that.

Also, my team got accepted as an RGSoC intern team. We are working on the Servo project, and I will write about it soon. Keep an eye on my site; there is a lot to come.

Recently I was working on a project and got confused about a term in JavaScript: *hoisting*. It is actually pretty simple and nothing to be confused about, so I thought I would share a very simple way to understand hoisting.

Let me start by writing a function:

function calculateMyAge(year) {
  console.log(2016 - year);
}

calculateMyAge(1996);

Output: 20

I am sure this is understandable and too straight forward.

So hoisting is nothing but JavaScript’s default behavior of moving declarations to the top. That means we can call the function even before it is declared:

calculateMyAge(1996);

function calculateMyAge(year) {
  console.log(2016 - year);
}

and this will still work.

Output: 20

So here, in the creation phase of the execution context (the global execution context in this case), the function declaration is stored in the variable object even before the code is executed. This is how hoisting works. Isn’t it pretty simple?
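One caveat worth adding (a sketch of my own, with hypothetical names, not from the example above): only declarations are hoisted, not assignments, so function declarations and function expressions behave differently:

```javascript
// Function declarations are hoisted with their body, so this works:
console.log(square(4)); // 16

function square(n) {
  return n * n;
}

// `var` declarations are hoisted, but their assignment is not, so the
// variable exists here yet still holds `undefined`:
console.log(typeof cube); // "undefined"

var cube = function (n) {
  return n * n * n;
};
```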

Let me know if you have any question 🙂

Rakhi Sharma | aka_atbrakhi | 2017-08-04 07:59:02

Have you ever had a bug in your code that turned you from a rational, reasonable human being to an angry, paranoid person? I’m talking about those times when you’re a 100% sure that your code is correct, so you start thinking there must be something very weird going on that causes it to crash. It gotta be something else! Maybe the virtual machine is broken, maybe it’s a compiler bug! When you’re working with hardware it’s even easier to find reasons. This wire doesn’t look really good, maybe I should replace it. Well, to keep programmers from wondering and wandering aimlessly, debugging tools have been invented. But debugging tools come with a warning. “Not to be used when you’re tired!” Haha!

Yesterday, I was working on a new feature for my driver and I managed to somehow cause a kernel oops. I double checked the code - it seemed fine (of course it did!). I started investigating the problem, I analyzed the stack trace – not very helpful on its own. I disassembled the code to find where it crashed and after victoriously finding the offset, I thought: no, it doesn’t make any sense, there’s nothing wrong with the code here. It seems I got somehow overpowered by a very well known human response: denial! So, I wasted some time testing again some other parts of the code, rereading some documentation and searching the internet for similar bugs. At that point, it got pretty late, so I decided I’d take a break and start again in the morning.

Today, I arrived at the workplace, turned on the computer and started recollecting my unsuccessful attempts at fixing the bug. As I sat at my desk, thoughts were spinning in my head (this time, the right kind of thoughts): What was that assembly code trying to tell me yesterday? Where was the code crashing? I launched vim and before the sound of my coffee pouring into the mug ceased, I knew exactly what was wrong! It took me several minutes to fix the problem that had caused me to spend a few hours on it just the day before.

You imagine how angry I was at myself. Why didn’t I check more carefully the instruction that was causing the crash? Well there’s no point in blaming yourself, if you don’t do it publicly, so there you go! Now you know how this post came about!

Well, every cloud has its silver lining, every mistake gives you a lesson to learn. Today I’ve learned not to get stuck on the same problem for too long and not to debug when I’m tired.

P.S. I felt pretty miserable as I was failing to find the cause of the crashes the other day, so shout out to that guy who was having a hard-to-find bug as well and described his problem on a programmers’ website, starting his post with “Am I going crazy?”. You didn’t help me fix my problem, but you certainly made my day better!

Narcisa Vasile | | 2017-08-04 00:00:00

Hello everyone, this is my third blog post related to my Outreachy work.

For the past two weeks my updates are as follows:

  • Changed the set-timbre block to be of “flow” type i.e., it can be used inside other blocks.
  • Created a dummy “Source” block to test the voice samples.
  • Completed testing of the new Synth code over voice and drum samples.
  • Tested “Effects” inside the set-timbre block.
  • Created a PR including the changes relevant to the new synth code.

A quick demo including the above changes made so far:


I faced a few difficulties while testing the Synth code specifically for the drum samples. When I was trying to play the drum by clicking on the set-drum block, I got the following error message every time “Tone.player: tried to start player before the buffer was loaded”.

After experimenting a bit with the set-drum block and digging into the source code of the Tone.js API, I found that it’s not possible to generate the Sampler on the fly. Therefore I changed the code structure so that all the required samplers are generated when the Synth initially loads, and stored them for future reference.
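The workaround amounts to a preload-and-cache pattern. Here is a rough sketch in generic JavaScript (not the actual Music Blocks/Tone.js code; the names are mine):

```javascript
// Build every sampler once, up front, while the sample buffers load...
const samplerCache = {};

function preloadSamplers(names, createSampler) {
  for (const name of names) {
    samplerCache[name] = createSampler(name);
  }
}

// ...and later only ever look them up, never construct them on the fly.
function getSampler(name) {
  if (!(name in samplerCache)) {
    throw new Error(`sampler "${name}" was not preloaded`);
  }
  return samplerCache[name];
}
```

By the time playback is requested, the sampler (and its buffer) already exists, which avoids the "tried to start player before the buffer was loaded" situation.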

Plan for the next couple of weeks:

  • Work on the “set-volume” method to make it specific to every instrument.
  • Integrate the UI components developed by the co-intern with the new Synth code.

Outreachy 2017 — 3 was originally published in Outreachy Diaries on Medium, where people are continuing the conversation by highlighting and responding to this story.

Prachi Agrawal | Stories by Prachi Agrawal on Medium | 2017-08-03 17:52:48

Wow, this week was all about what I love in my profession – FE in all of its layers.
I love making an efficient component, learning how to combine it in the way that best fits the current system, looking at other component references, and taking what is best for the new component’s needs.
I love investigating a “core” component to see what its behavior is and what its limitations are.

So this week I dealt with the Wikimedia FE conventions and standards, and improved and improved (and improved some more 🙂 ) the UI component that is responsible for the user interaction of this feature.

I got feedback from the tech lead of the Echo team and got a chance to learn some more ways of doing many operations in the FE (@Wikimedia).

It was a fun week for me; I hope to get this feature done soon and make it live on Wikimedia (well, beta first 🙂 ).

Ela Opper | FoxyBrown | 2017-08-03 10:58:00

My journey with Rust 😍

It’s been two months already! Work took off with a discussion on lifetime errors: deciding which ones are more important than others and selecting specific cases to deal with.

loop {

//Iteration I

The Perfect Error

The hunt to find the simplest and most basic error message to start with began. Considering various common lifetime errors, we decided to go ahead with the one above. We had discussions on region inferencing in the Rust compiler to understand anonymous regions (lifetimes), in particular the RegionResolution error.

fn foo<'a>(x: &'a i32, y: &i32) -> &'a i32 {
    if x > y { x } else { y }
}

We picked up an example with one anonymous and one named region.

The Perfect Error Message

Finding the perfect example to start off wasn’t enough. A lot of time went into discussing the final look of the error message. Lifetimes are a major reason why beginners in Rust struggle at the start, and our whole effort was directed at changing that experience.

fn foo<'a>(x: &'a i32, y: &i32) -> &'a i32 {
                       - consider changing the type of `y` to `&'a i32`
if x > y { x } else { y }
                      ^ this reference must have lifetime 'a
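Applying the suggestion in that message — giving `y` the lifetime `'a` — makes the example compile (a minimal sketch, with a `main` added for illustration):

```rust
fn foo<'a>(x: &'a i32, y: &'a i32) -> &'a i32 {
    if x > y { x } else { y }
}

fn main() {
    let (a, b) = (1, 2);
    // Both references now share the lifetime 'a, so returning either is fine.
    assert_eq!(*foo(&a, &b), 2);
}
```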

Write Code

Well, when you know what the expected output is, coding becomes a bit easier. This was the first time I had to dive deep into the region_inference-related error_reporting modules in the Rust compiler. Walking the HIR map to search for the anonymous region and replacing it with the named region using TypeFolders resulted in the above error message.

Write Better Code

The code review has been an empowering experience for me. I have previously worked on huge codebases for my internship but this was something new. As a beginner in open source, a community review structure has given me a broader perspective on the programming principles of Rust.

PR Merged ❤

Run the example here.

//Iteration II

The Next Perfect Example

E0623 covers cases like the one below, where both regions are anonymous.

fn foo(x: &mut Vec<&u8>, y: &u8) {
    x.push(y);
}

The Next Perfect Error Message

11 | fn foo(x: &mut Vec<&u8>, y: &u8) {
| --- --- these references are not declared with the same lifetime...
12 | x.push(y);
| ^ ...but data from `y` flows into `x` here

error: aborting due to previous error
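Following the message’s hint — declaring both references with the same named lifetime — resolves E0623 (a minimal sketch, with a `main` added for illustration):

```rust
// Both references now carry the same named lifetime 'a, so data
// from `y` is allowed to flow into `x`.
fn foo<'a>(x: &mut Vec<&'a u8>, y: &'a u8) {
    x.push(y);
}

fn main() {
    let value: u8 = 7;
    let mut list: Vec<&u8> = Vec::new();
    foo(&mut list, &value);
    assert_eq!(list, vec![&value]);
}
```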

The hir::Ty is extracted for both regions using a NestedTypeVisitor. NestedTypeVisitor.found_type gives us the required regions, i.e., &u8 and Vec<&u8> for the case above. For the case of structs, we have the following error message (more on this in an upcoming post).

| fn foo(mut x: Vec<Ref<T>>,
| ---
| y: Ref<T>)
| --- these two structs are not declared with the same lifetime ...
| x.push(y);
| ^ ... but data flows from `y` into `x` here
| }
help: consider changing the signature to:
| fn foo<'a>(mut x: Vec<'a, Ref<T>>, y: Ref<'a, T>)

Write Code

The changes for the above error message have been merged. The changes for structs are not yet in PR phase.


And the cycle continues…

The changes for E0621 and E0623 have been successfully merged 😃.

Gauri P Kholkar | Stories by GeekyTwoShoes11 on Medium | 2017-08-01 19:20:51

Nftables has a very helpful command, nft monitor, which shows changes to the ruleset in native nft format, XML, or JSON.

Type nft monitor in a terminal.

Now, open another terminal and type the following.
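Any ruleset change will do; for example (the table and chain names below are arbitrary, and the commands need root):

```shell
nft add table ip filter
nft add chain ip filter input { type filter hook input priority 0 \; }
nft add rule ip filter input counter accept
```

Each of these commands triggers a corresponding event in the monitor terminal.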

Then go back to the first terminal.

Any change in the ruleset can be viewed with the help of the monitor command.
There are also options to monitor specific event changes. Suppose we want to
check updates only for chains:
$ nft monitor chains

The syntax of the monitor command:
$ nft monitor [new|destroy] [tables|chains|rules|sets|elements|ruleset] [xml|json]
Here the events are tables, chains, rules, sets, elements and ruleset.
Refer to this patch, which adds event ruleset.

Also, there is a command to trace the ruleset. Check the wiki page to learn more about it.
$ nft monitor trace

Varsha Rao | Varsha's Blog | 2017-08-01 19:06:48

While communication technology and collaboration software have significantly changed corporate decision-making, there remains one aspect of business that is impossible to replace: actual face time. As a result, mobility continues to be a critical growth driver for corporations and entrepreneurs seeking to increase their international market share — so much so, in fact, that global business travel spend is predicted to reach USD 1.6 trillion in 2020.

But, despite the world becoming ever more interdependent, there is still huge disparity in the levels of travel freedom enjoyed by citizens around the world. Needless to say, those who hail from countries with few visa-waiver agreements find time-consuming visa-application processes a serious hindrance to transnational mobility. A second passport, then, offers an effective means of increasing travel freedom and accessing opportunity in the world’s largest consumer markets.

In fact, for the thousands of wealthy individuals and their families who wish to operate globally, reduce their exposure to risk, diversify their investments, increase their international flexibility and open up new opportunities for growth, an alternative residence or citizenship is considered an essential resource.

Travel on Demand

St. Lucia’s citizenship-by-investment programme offers one of the most efficient and affordable routes to accessing the world’s foremost business destinations. The island nation grants its passport holders visa-free access to 127 countries, including the UK, the Schengen area, Hong Kong and Singapore. The investment costs are reasonable, processing is swift, and citizenship can be passed on to future generations by descent.

For those looking to Europe for passport diversification, Malta’s citizenship-by-investment programme –– the only one to be endorsed by the EU –– is the most popular. Ideally located between Southern Europe, North Africa and the Middle East, the country offers its citizens the freedom of unrestricted travel to no less than 167 countries. It also places one of the lowest tax burdens on residents, with the system combining corporate taxation with favourable tax credit incentives. In addition, Maltese citizenship confers the right to live, work and study in any of the 28 EU countries as well as Switzerland — effectively providing 29 European-based tax-residence options.

Broaden Your Horizons

Citizenship-by-investment programmes like those of St. Lucia and Malta provide a mutually beneficial solution that meets the needs of governments as well as a generation of ever-mobile global citizens. In 2014, global residence and citizenship planning firm, Henley & Partners, was mandated by the Government of Malta to design and implement the country’s citizenship-by-investment programme. This year, the firm opened an office in St. Lucia, its fourth in the Caribbean. As a result, Henley & Partners is uniquely positioned to provide in-depth advice and support on citizenship-by-investment in both countries, among numerous others.

As the world becomes ever more globalised and people live and conduct business on a progressively more international scale, the freedom to move within the global market provides a decided edge. So, while property, pension funds and private equities still have their place, increasingly, the most valuable asset in the portfolios of global investors is a second citizenship.

Roxana Necula | Tuxilina | 2017-08-01 12:29:55

The UK’s Serious Fraud Office has started a formal investigation into British American Tobacco over corruption allegations in Africa.

The SFO said it was “investigating suspicions of corruption in the conduct of business by BAT, its subsidiaries and associated persons”.

The UK tobacco firm said in a separate statement that it was investigating allegations of “misconduct” in its business through external legal advisers and that it had been cooperating with the SFO before the formal investigation was opened. The group said it would cooperate with the SFO.

BAT would not confirm whether the investigation concerned allegations made by BAT whistleblower Paul Hopkins, who worked for the firm in Kenya for 13 years. He has claimed he paid bribes on the company’s behalf to the Kenya Revenue Authority for access to information that BAT could use against its Kenyan competitor, Mastermind. Hopkins also claimed BAT Kenya paid bribes to government officials in Burundi, Rwanda and the Comoros Islands to undermine tobacco control regulations.

BAT hired law firm Linklaters in February 2016 to conduct a “full investigation” into the claims made by Hopkins. BAT later dropped Linklaters and appointed Slaughter and May as sole adviser on the case.

BAT and other multinational tobacco companies have threatened governments in at least eight countries in Africa in an attempt to block anti-smoking laws, a Guardian investigation found last month.

The SFO investigation is another blow to the company, after the US health watchdog, the Food and Drug Administration, announced tighter regulations on the tobacco industry, which hit shares in tobacco firms.

BAT has recently completed its acquisition of US firm Reynolds, which will create the world’s biggest tobacco company.

The BAT chief executive, Nicandro Durante, told the Financial Times in March that the firm would “act in a very strong way” if wrongdoing was found to have taken place. He said: “We are in 200 countries, so I cannot give a 100% guarantee that everything’s going to go by the book.”

Roxana Necula | Tuxilina | 2017-08-01 12:19:02

The eurozone notched up growth of 0.6% in the second quarter of the year, official Eurostat figures showed.

The figure puts annual growth in the 19-country bloc at 2.1% year on year.

First-quarter growth was revised down slightly from 0.6% to 0.5%.

Other figures released on Monday showed unemployment in the zone was at its lowest since 2009, building on the picture of improving economic health across the area.

On Friday, figures showed Spain’s economy, one of the worst-hit by the financial crisis, grew by 0.9% in the second quarter, suggesting the country’s economy had finally grown back to the size it was before 2008.

The International Monetary Fund last week said the outlook for several eurozone economies was brighter than initially thought, with countries including France, Germany, Italy and Spain seeing growth forecasts revised up.

The European Central Bank is planning to tighten up monetary policy after years of pumping up activity through low interest rates and bond-buying.

It intends to begin the process in the autumn, although inflation remains low at 1.3%, well under the 2% target for the eurozone.

Low inflation is often one of the side effects of weak economic activity.

Roxana Necula | Tuxilina | 2017-08-01 12:18:02

Investors may be in for disappointing market returns in the decade to come with valuations at levels this high, if history is any indication.

Analysts at Goldman Sachs pointed out that annualized returns on the S&P 500 over the following 10 years were in the single digits or negative 99 percent of the time when starting from valuations at current levels.

In a chart, they point out that the S&P’s cyclically adjusted price-to-earnings ratio (CAPE) is currently around its highest historical levels. CAPE is a widely followed valuation metric developed by economists John Campbell and Robert Shiller, the latter a Nobel Prize winner.

The chart shows how the S&P 500’s 10-year-out returns are mostly below 10 percent or negative when CAPE is around historical highs.

Here’s the chart:

Source: Goldman Sachs Asset Management

“In light of high equity valuations and the potential for lower returns, we see the case for a fresh look at alternative strategies” such as international small-cap stocks, Goldman said in its third-quarter outlook containing this chart.

“International small caps may be well positioned to benefit from the global economic expansion,” they said. “International small caps’ exposure is 70% cyclical by sector and includes a relatively high degree (55%) of domestic revenue drivers. These domestically-oriented companies in the past have also been better positioned than international large caps during periods when broad international equity markets have outperformed the US.”

The S&P 500 has had a banner year thus far, advancing 10.4 percent and posting record highs. However, the index’s sharp rise has raised concern that stocks may be too expensive.

Despite this dire historical data, many strategists will tell you not to drastically change your long-term asset allocation philosophy based on valuations. Plus, a single-digit return isn’t so bad, considering the potential returns for other asset classes out there.

“The most important thing with respect to the CAPE ratio is that it should be used to set expectations, rather than to time the market,” Michael Batnick, director of research at Ritholtz Wealth Management, told CNBC via email. “If you pay more for an investment, you should expect to receive less in return.”

“But jumping in and out of the market because stocks appear expensive, is a very difficult way to invest,” said Batnick, who tweeted out the Goldman chart on Sunday.

Batnick also noted CAPE has spiked 38 percent since June 2014. In that time period, the S&P has gained 25.3 percent.

Roxana Necula | Tuxilina | 2017-08-01 12:16:50

Growth in China’s manufacturing quickened in July, with output and new orders rising at the fastest pace since February on strong export sales.

But even as firms boosted purchasing in anticipation of more business, employment levels at factories fell at the fastest pace in 10 months and a reading on business outlook was the lowest since last August – a sign economic momentum may start to ebb in the months ahead.

The Caixin/Markit Manufacturing Purchasing Managers’ Index (PMI) rose to 51.1 in July, above the 50-point mark separating growth from contraction and well ahead of the 50.4 in June.

A resurgent export sector underpinned by a brightening global economy helped China post surprisingly strong gross domestic product growth of 6.9 per cent in the first half of the year.

The Caixin readings diverged from an official PMI survey released on Monday which showed growth in China’s manufacturing sector cooled slightly last month, with export demand slackening.

Divergence in the two indexes is usually a result of the Caixin PMI’s smaller sample size rather than anything fundamental to China’s economy, said Jonas Short, who heads the Beijing office at investment bank Sun Hung Kai Financial (SHKF).

The Caixin new export orders reading came in at 53.5 in July, up from 50.9 in June and the highest since February.

Despite mixed signals, analysts are still generally optimistic about the outlook for China’s exports, even if there is a slight dip in July.

“We are not that worried about the export outlook for China in the second half,” said ANZ senior China economist Betty Wang.

While China’s foreign trade faces a mostly positive environment in the second half of the year, uncertainties still exist, Vice Commerce Minister Qian Keming said in Beijing on Monday.

Roxana Necula | Tuxilina | 2017-08-01 12:15:03

Coming from the one country that exclusively uses the imperial system, there’s a lot I take for granted about the way I measure things. In metric, 1.1 kg makes perfect sense, but in the imperial system, 1.1 lbs may look fine, but how do you weigh that out? I’m someone with a lot of cooking and baking experience, but I’m not someone to whom mathematical knowledge comes naturally. I know off the top of my head that a pound is 16 ounces, so that’s easy, but what is .1 lbs in ounces? It isn’t quick math, for me anyway. It’s 1.6 oz, but I only know that from googling it. So to deal with this, I created a function that converts things like 1.1 lbs to “1 pound 1.6 ounces”.

static char *
multiple_units (double amount, GrUnit unit)
{
  double fractional, integer;
  char result[256];

  /* Split e.g. 1.1 into 1.0 (integer) and 0.1 (fractional) */
  fractional = modf (amount, &integer);

  if (unit != NULL) {
    if (fractional > 0) {
      double amount2 = fractional;
      GrUnit unit2 = unit;

      /* Let human_readable pick a better unit for the fractional part */
      human_readable (&amount2, &unit2);

      if (unit != unit2) {
        /* Two units, e.g. "1 pound, 1.6 ounces" */
        char *number1 = gr_number_format (integer);
        const char *name1 = gr_unit_get_display_name (unit);
        char *number2 = gr_number_format (amount2);
        const char *name2 = gr_unit_get_display_name (unit2);

        snprintf (result, sizeof result, "%s %s, %s %s",
                  number1, name1, number2, name2);
      }
      else {
        /* The fractional part kept the same unit, so keep one unit */
        char *number1 = gr_number_format (amount);
        const char *name1 = gr_unit_get_display_name (unit);

        snprintf (result, sizeof result, "%s %s", number1, name1);
      }
    }
    else {
      /* Whole amount, one unit */
      char *number1 = gr_number_format (amount);
      const char *name1 = gr_unit_get_display_name (unit);

      snprintf (result, sizeof result, "%s %s", number1, name1);
    }
  }
  else {
    /* No unit, just the number */
    char *number1 = gr_number_format (amount);

    snprintf (result, sizeof result, "%s", number1);
  }

  g_message ("result is %s", result);

  /* result is a stack buffer, so return a heap copy */
  return g_strdup (result);
}

Basically it splits the amount into the integer and fractional parts of the number, then runs the fractional part through my human_readable function, which determines whether a unit is the ideal one for human readability (1000000 grams probably makes more sense as kilograms, etc.). If that function changes the unit, the function outputs the integer part as one unit and the fractional part, now its own amount, as a second unit; otherwise it spits out just the amount and the unit as a string, formatted and ready for display.
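The splitting logic described above can be sketched in Python as follows (the function name and the rounding are my own illustration, not part of the GNOME Recipes code):

```python
import math

def split_pounds(amount_lbs):
    """Split a decimal pound amount, e.g. 1.1 lbs -> (1 pound, 1.6 ounces)."""
    # modf returns (fractional, integer) parts of a float
    fractional, integer = math.modf(amount_lbs)
    ounces = round(fractional * 16, 2)  # a pound is 16 ounces
    return int(integer), ounces

print(split_pounds(1.1))  # (1, 1.6)
```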

Paxana Amanda Xander | | 2017-07-31 20:36:26

I had participated in Outreachy’16 December Internship round and got a chance to work under Wikimedia Foundation on Wiki Education Foundation’s Dashboard project. I got a chance to work on the same project as a GSoC intern, luckily under the guidance of the same mentors Sage Ross and Jonathan Morgan. Outreachy provides a total of $500 as a travel allowance to every intern for attending open source conferences. After discussing with my mentors and knowing that I will get a chance to meet them, I decided to attend Wikimania’17 — Wikimedia Foundation’s annual conference held in Canada this year from August 9–13, 2017. This week I started focusing on my Canada Visa Application and making all the required bookings.

It was quite a hectic procedure to complete all the forms and submit all the documents, but I was able to complete the application on time (yay! :)

On the Dashboard, I worked on one of the issues detected in the previous user testing session: users were not able to understand the working of the Submitted Input Field under the Course Edit Details section. We solved this by placing an info tooltip icon next to the field, which helps users understand its significance. Link to the commit.

Sejal Khatri | Stories by Sejal Khatri on Medium | 2017-07-31 14:18:42

Here are two quick updates about Lightbeam because I can’t contain the excitement to myself.

Lightbeam goes responsive, yayyyyy!!!

Responsive Lightbeam

Responsive UI

I am extremely happy about achieving this today. Making the UI responsive was on our to-do list, but it got done today accidentally, in an attempt to answer a few of the comments on one of my PRs. CSS Grid is used, and I must say the fr unit is so handy.

Mozfest proposal submission

Proposed a session for the Mozilla Festival 2017, London. Here is the proposal. The coolest part here is that the Google Form submission automatically gets created as a GitHub issue.

Princiya Marina Sequeira | P's Blog | 2017-07-31 13:12:33

I’m glad to read that you’re in a good place now. All the best for your future. :)

Prathiksha G | Stories by Prathiksha G Prasad on Medium | 2017-07-29 16:41:07

This July, I got the opportunity to attend and speak at the biggest Python conference in Europe. The venue was the marvellous conference centre in Rimini, Italy. The conference took place from 9th July to 16th July.

What I spoke about

I spoke about “Using python and microservices to fuel WebPush at Mozilla” during the conference. This was my Outreachy project — developing a push server for Kinto.

I talked about how WebPush works and about all the players involved and their roles. I also talked about how my mentor and I developed the service. The stage fright was there, but it went away the moment I got into the flow of speaking.

The conference

I also got to attend some amazing talks that made me sit in my seat in awe, wondering how many amazing things are being done all around the world. There were talks about AI at Amazon, how deep learning models are served, asynchronous programming in Python, OpenAPI development, Django and GraphQL, and many more.

I learnt a lottt! I attended talks which were about totally new technologies and some which built onto my existing knowledge. Also, interacting with some people about where they work and what they work on, gave me some insight into their work and the technologies they use.

There were social events, lunch and snacks during the conference. So there wasn’t just learning; each of the attendees had a lot of fun too.


Outreachy has been a life changing experience. It has made me not only a more technically strong person but also made me strong mentally. The experience has helped me overcome imposter syndrome.

The next round of Outreachy will take place from Dec-March. Do participate! And feel free to reach out to me if you need any help.

All the best!

Mansimar Kaur | Stories by Mansimar Kaur on Medium | 2017-07-29 16:07:45

Prior to applying for Outreachy, one of the points that drew me to Fedora’s Bodhi project was the proposed idea of bash completion for the Bodhi command line interface (CLI). For those unfamiliar with bash completion, it is the phenomenon that occurs when you begin to type a command into your Unix shell (in this case: bash), press tab, and bash either (1) autocompletes the command for you or (2) provides a list of all of the possible ways to complete that command. During my undergrad CS studies, my favorite projects entailed creating Unix shells, command line chat servers, and text-based command line games. All of these projects have a clear common denominator: the command line. I found that I would spend hours upon hours blissfully trying to perfect my CLI’s user interface. Thus, this seemed to be the Outreachy project of my dreams!

Bodhi’s CLI is implemented using Click, a cool Python package for creating CLIs. If you aren’t very familiar with Bodhi, the majority of its code is written in Python. Prior to my internship, I was not well-versed in Python, let alone Python libraries or packages (I had only ever used it to write small scripts, test algorithms, and create servers).

Because of the nifty framework Click, the bash completion for Bodhi proved simpler (far simpler) than I anticipated. Click actually already supports a certain level of bash completion for its CLIs. All that was required was to generate the activation script and to get around the Python errors. Bodhi now ships with the bash-complete activation script, so all you need to do is press tab! The PR enabling bash completion can be found here.
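For context, Click generates that activation script from an environment variable named after the executable. A sketch, assuming the command is installed as bodhi (this is Click's documented bash-completion mechanism, not Bodhi-specific code):

```shell
# Generate the bash completion script once and save it...
_BODHI_COMPLETE=source bodhi > bodhi-complete.sh

# ...then source it from your shell startup file,
# or activate it directly in the current shell:
eval "$(_BODHI_COMPLETE=source bodhi)"
```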


What does Bash completion look like within the Bodhi CLI?

Screenshot from 2017-07-28 04-42-35

Bash completion supports the following commands:

  • bodhi
    • updates
      • subcommands: comment, download, edit, new, query, request
        • each subcommand’s respective parameters (too many to list)
    • overrides
      • subcommands: edit, query, save
        • each subcommand’s respective parameters (too many to list)


Enabling Bash Completion

It is enabled by default. Just type the following* into your Vagrant instance’s shell:

bodhi <TAB><TAB>

*Replace “<TAB><TAB>” with two quick taps of the tab key ;). Oh, and yes — there’s a much-needed space in between bodhi and <TAB>!


Uh oh! Why am I seeing loads of warnings?

For bash completion to work properly (i.e., without various Python errors printing to your screen each time you press the tab key), you will need to ignore Python warnings. Otherwise, your output will look like this:

Screenshot from 2017-07-28 04-22-51


Ignoring Python warnings

Python warnings tend to be helpful in a dev environment, so proceed at your own risk of future frustration. You can either ignore all Python warnings, or you can temporarily ignore Python warnings while using bash completion.

To ignore all Python warnings, change the following line in your .bashrc from:

export PYTHONWARNINGS="once"

to:

export PYTHONWARNINGS="ignore"

Again, as a dev, you probably don’t want to ignore Python warnings everywhere. It may be a good idea to ignore the warnings only within the Bodhi CLI. If you’d still like to see Python warnings outside of the Bodhi CLI, you can use the following (somewhat hacky) solution by editing the file:
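The edited file is not reproduced here, but one hypothetical shape for such a solution is a small wrapper function in ~/.bashrc that scopes the variable to bodhi invocations only:

```shell
# ~/.bashrc: ignore Python warnings only while running bodhi;
# the function shadows the real command on purpose
bodhi () {
    PYTHONWARNINGS="ignore" command bodhi "$@"
}
```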

Screenshot from 2017-07-28 05-17-19


Scalability and future improvements

Earlier I mentioned that Click only supports a certain level of bash completion. Click has yet to implement dynamic completion. Because Bodhi relies on Click for its CLI, and therefore for its bash completion, Bodhi lacks dynamic completion as well. This feature would look like this:

bodhi updates new bodhi-2.7.0-<TAB>

where the auto complete options would result in the following

bodhi updates new bodhi-2.7.0-
fc25 fc26

Though, for Bodhi to use dynamic completion, it would require changes to upstream Click. Unfortunately, I cannot find evidence that Click has plans or desire to add this feature anytime soon.


Reflection and lessons learned

After learning how simple it is to create CLIs with Click, I am inspired to port some of my favorite projects (originally written in C) to Python. Although I am still discovering many of its nuances, Python is quickly becoming my favorite programming language. I still have some leftover habits of writing Python as though it’s C and thus may occasionally find myself not realizing a simpler way of approaching a certain line of code, but that’s part of what makes learning Python great: consistently finding yourself shocked at its simplicity. And lucky for me — I have a very helpful mentor!

Keep in mind:

  1. There may be a simpler way of adding a new feature than you initially imagined, so before coding, be sure to search for external packages, tools, and libraries — and most importantly: check with your mentor!
  2. An Outreachy project may not be exactly what you anticipate — and that’s okay! — so it may be a good idea to vest interest in more than one project idea.
  3. You don’t have to be extremely well-versed in an open source project’s main language (though it will certainly save you quite a bit of time)
  4. If you want a feature in open source project A which depends on open source project B, it may require contributing to open source project B, which could cause hiccups.


Many thanks to my very helpful, patient, and encouraging mentor, bowlofeggs!

Caleigh Runge-Hottman | Caleigh Runge-Hottman | 2017-07-28 12:19:59

SVG is the preferred choice for D3. But when you expect a lot of DOM nodes (yes, as in the case of Lightbeam), you need to start worrying about DOM performance and make the call to step out of the SVG comfort zone. We chose Canvas over SVG for Lightbeam 2.0!

In this post, I would like to highlight the key points of drawing and interactivity on the HTML5 Canvas element.



In our case, we followed the first approach – using D3 solely for its functional purpose (D3’s force layout algorithm) and then drawing onto the canvas.

D3 joins

Normally, when you follow this approach, you trade away D3’s super-rich data binding and joining functionality. It means you only draw your graph once; you aren’t expecting new data that would require a graph redraw or update. This is why D3 with some dummy HTML nodes is the preferred way to retain the data binds and joins. D3’s joins are a way to dynamically update the graph without having to redraw the whole thing over and over again.

Screen Shot 2017-07-28 at 11.51.35
Typical example of enter, update and exit methods in D3 joins. Link

The dummy HTML element approach

Here is an example of using custom aka dummy HTML elements to render D3 on canvas.

Screen Shot 2017-07-28 at 11.59.52
custom elements

custom is definitely not a standard DOM element type, so it will not be rendered in any way and will live only in memory (virtual DOM). It is used as a container for other dummy nodes.

How does Lightbeam update dynamically without D3 joins?

Even though we followed the first approach, Lightbeam has dynamic updates. Luckily, D3’s force layout algorithm takes care of the new node and link updates/additions and we managed to use D3 only for its functionality without the dummy element approach. Here is the PR.

Interactivity on the canvas

Canvas is a single DOM element and mouse interactions on the canvas can be sometimes tricky because you don’t have independent access to the nodes and links (or any graph element). In our case, we have the following interactions:

  • On hover over the nodes (websites) show tooltips with the name of the website
  • Drag the graph
  • Zoom in and zoom out the graph
  • Panning – shifting the view of the graph

At the time of writing this blog post, only tooltip based interactivity is achieved. PR

I shall explain tooltip based canvas interactivity in the next post.


Initial performance results

I had never done performance testing using browser devtools before. When we migrated from SVG to Canvas, I was curious to see the performance results of using canvas. For a given small sample of websites, here are the test results. Canvas has a rendering time of 4.3 milliseconds 🙂

Old Lightbeam (SVG)
New Lightbeam (SVG)
New Lightbeam (Canvas)

Princiya Marina Sequeira | P's Blog | 2017-07-28 11:15:21

I continued to do the work about the migration of the Plinth. I summarized the packages which are needed by Plinth and their alternates in Fedora.

Sometimes there are small differences in details between packages with similar names in Fedora and Debian; I’ll check them one by one and find better solutions.

For libjs-bootstrap and libjs-modernizr, I couldn’t find suitable alternatives, so I extracted the .deb packages and put the files under javascript/.

As you know, most foreign websites are blocked in mainland China, including all of the Google services, many frequently used IM applications, Wikipedia and so on, so we have to use a VPN or other means to connect to servers located abroad to load these websites. But many VPNs in China (including mine) were unexpectedly blocked in July for political reasons, and I couldn’t update my blog or code until I found a new way to get “over the wall”, so my work was delayed this month.

And because my mentor also lives in China and refuses to use non-free software, we normally communicate over Gmail and Telegram, so I lost contact with my mentor for many days. As soon as we were able to reach each other in the last few days, I told him about the work I did this month and the questions I ran into. Given this situation, we are thinking about scaling down our project temporarily and postponing some less important work; I will put forward concrete plans with my mentor as soon as possible.

Mandy Wang | English WoCa, WoGoo | 2017-07-27 16:31:06

Some take-home points from the talk by Amy Rich, Manager of Release Engineering Operations at Mozilla Corporation.

The one hour talk started off with Amy explaining about her educational background, her journey as a Mechanical Aeronautical Engineer. She stressed that even the smallest things matter in the long run. Everything that she had picked up in high school and college and her previous jobs have contributed a lot to her career growth, technical as well as the business side. Keep questioning yourself, “What’s next?”. Find mentors to discuss your life goals with. Build a network. Open up doors.

Pick up other skills apart from *technical skills*. Soft skills and people-management skills are a welcome bonus :). She also highlighted the diversity problem in the tech world: highlighting women in tech has lost its flavor, as women are no longer looked up to for being awesome engineers and programmers but for being women programmers and engineers. Combating the typical *geek* stereotype means putting in extra effort for acceptance. Hopefully, with more and more involvement of women in tech, this too will come to an end.

We shared our struggles in the tech world. From how people react when you introduce yourself as a software developer, to putting up with the gender ratio in engineering colleges, to the constant need to prove and improve yourself, it felt like familiar ground. Sometimes it means a lot to know that you are not the only one.

An awesome learning experience.

Gauri P Kholkar | Stories by GeekyTwoShoes11 on Medium | 2017-07-26 20:14:29