This week I have enriched the API for the feature.

It was difficult because I was not used to the language at all, nor to the project's logic and structure.

I managed to get on with the task; after lots of investigation and lookups I arrived at a tentative, best-guess solution for the required time delay.

The build still fails on some tests I hardly understand, but I will keep looking for solutions until I make it green again 🙂


Ela Opper | FoxyBrown | 2017-06-22 13:22:56

Adding visualizations, ES6 destructuring, Code refactoring, User testing…

Adding visualizations to the User Profile Pages.

Adding visualizations for the Total Courses/Programs and Total Students/Editors taught by a user to the User Profile Pages, using the Vega specification.

  • Visualization for the Programs Taught by an Instructor — Programs are distributed on the time scale to show the period in which the user was active as an Instructor.
  • Visualization for the Editors Taught by an Instructor — The number of editors is distributed on the Y scale, plotted against the time when they joined the courses taught by the user. This helps users see the growth in the number of editors they have taught.

Code refactoring using ES6 destructuring syntax.

Refactoring a React component into a function. Sage explained it to me in simple words, saying — “The idea is that, since some of the React Components strictly take props and render HTML, we can write it as just a plain function of its props that returns the JSX, which otherwise render() would return. The other nice thing is the destructuring of props in the function’s arguments, so we can just call them by the prop name, rather than using this.props.something everywhere in the component.”
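
To illustrate with a made-up component (not the actual Dashboard code), the refactoring looks like this:

// before: a class component that only takes props and renders HTML
class Greeting extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}!</h1>;
  }
}

// after: a plain function of its props, with the props destructured
// right in the argument list
const Greeting = ({ name }) => <h1>Hello, {name}!</h1>;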

Link to the commit made for refactoring the ByStudentsStats and StudentStats components using ES6 destructuring syntax.

We are using arrow functions in the code.

Basic Syntax —

func = (param1, param2, …, paramN) => { return expression;}

Advanced Syntax —

func = ({param1, param2, …, paramN}) => { return expression;}
// Destructure the object argument: its properties are assigned to same-named parameters

This syntax is used for the main function, which accepts an object and assigns properties of the object to variables of the same name. Destructuring can be applied to function arguments that are objects or arrays; that's why we use the advanced syntax, as the small example below shows.
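
A tiny illustration of that behaviour (the names here are made up):

const describe = ({ courses, students }) => `${courses} courses, ${students} students`;
describe({ courses: 3, students: 120, editors: 45 }); // "3 courses, 120 students"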

User testing session with Shani Evenstein.

In a user testing session, we ask the user to do a task on the dashboard so that we can see how he/she uses it and talk about anything that is confusing about it.

Shani uses the dashboard as an instructor and programs leader. She had some feedback about the Dashboard; I'll mention a few features which Shani suggested we should add to the Dashboard —

  • For an instructor or admin: finding out which users are still active after the course has ended; basically, a way to track users.
  • Being able to fetch old data from the Education Extension into the Dashboard, though Sage mentioned that this is not feasible.
  • Including page views of old programs for historic courses while calculating statistics for the page views of an article — currently it displays zero page views, so it seems that some courses didn't have an impact.

Shani helped us comprehend how users understand the Dashboard, and it helped me prioritize the tasks to be completed. I am yet to have an in-depth discussion about this with Sage.

Week 3 of coding has begun; I have submitted a few patches and am now working on the suggestions given by Shani. I'll update you on the further enhancements made to the Programs and Events Dashboard in my next blog post. Stay tuned!!

Sejal Khatri | Stories by Sejal Khatri on Medium | 2017-06-22 12:57:44

An introduction to the new Rust error code E0611.

Given a simple function foo

fn foo<'a>(x: &i32, y: &'a i32) -> &'a i32 {
    if x > y { x } else { y }
}

the compiler threw an error corresponding to error code E0312.

    error[E0312]: lifetime of reference outlives lifetime of borrowed content...

But with E0611, this is how the new error message looks.

    error[E0611]: explicit lifetime required in the type of `x`
  

The What?

E0611 handles the case of lifetime errors where

  1. It is a Function Declaration or a Trait. Closures and Trait impls will be taken care of later.
  2. One of the arguments has a lifetime parameter.
  3. The other argument is missing its lifetime parameter.
  4. The error is of type RegionResolutionError, specifically ConcreteFailure.

The Why?

What’s E0312 for?

As stated in the diagnostic.rs file

E0312: r##"A lifetime of reference outlives lifetime of borrowed content.
Erroneous code example:
```compile_fail,E0312
fn make_child<'human, 'elve>(x: &mut &'human isize, y: &mut &'elve isize){
*x = *y; // error: lifetime of reference outlives lifetime of borrowed content}

The compiler cannot determine if the `human` lifetime will live long enough to keep up on the elve one. To solve this error, you have to give an explicit lifetime hierarchy:

    To solve this error, you have to give an explicit lifetime hierarchy:
  
fn make_child<'human, 'elve: 'human>(x: &mut &'human isize,                                     y: &mut &'elve isize) {    
*x = *y; // ok!
}
    Or use the same lifetime for every variable:
  
fn make_child<'elve>(x: &mut &'elve isize, y: &mut &'elve isize) {    
*x = *y; // ok!
}

Why E0611?

Since the main idea is modifying error messages for existing lifetime errors based on the different cases instead of just having a single generalised error code for all of them piled up together, we decided to introduce a new error code E0611.

Here’s more on how to introduce a new error code.

error[E0611]: explicit lifetime required in the type of `y`
--> $DIR/ex1-return-one-existing-name-if-else.rs:12:27
|
11 | fn foo<'a>(x: &'a i32, y: &i32) -> &'a i32 {
| - consider changing the type of `y` to `&'a i32`
12 | if x > y { x } else { y }
| ^ lifetime `'a` required
error: aborting due to previous error(s)

The How?

1. Designing the message

Clearly, this is the most interesting and fun phase of the three. Once we were clear on what kind of lifetime errors we would be tackling, the next thing to do was to design the error message.

struct_span_err!(self.tcx.sess, var.pat.span, E0611, "explicit lifetime required in the type of `{}`", simple_name)

Design 1

error[E0611]: lifetime mismatch
--> $DIR/ex1-return-one-existing-name-if-else.rs:11:24
|
11 | fn foo<'a>(x: &'a i32, y: &i32) -> &'a i32 {
| ^ consider changing the type of `y` to `&'a i32`
12 | if x > y { x } else { y }
| - lifetime `'a` required

Design 2

error[E0611]: lifetime 'a required
--> $DIR/ex1-return-one-existing-name-if-else.rs:11:24
|
11 | fn foo<'a>(x: &'a i32, y: &i32) -> &'a i32 {
| ^ consider changing the type of `y` to `&'a i32`
12 | if x > y { x } else { y }
| - lifetime `'a` required

Final Design

error[E0611]: explicit lifetime required in the type of `y`
--> $DIR/ex1-return-one-existing-name-if-else.rs:11:24
|
11 | fn foo<'a>(x: &'a i32, y: &i32) -> &'a i32 {
| - consider changing the type of `y` to `&'a i32`
12 | if x > y { x } else { y }
| ^ lifetime `'a` required
error: aborting due to previous error(s)

2. Writing the code

Looking through debug logs, I understood that the error being dealt with was of type RegionResolutionError, in particular ConcreteFailure, which is defined here. The words lifetimes and regions are used interchangeably.

This is how the error variable looks for our fn foo. Let's understand its components.

ConcreteFailure(Reborrow(p1.rs:3:27: 3:28), ReFree(CodeExtent(4/CallSiteScope { fn_id: NodeId(4), body_id: NodeId(36) }), BrNamed(CrateNum(0):DefIndex(2147483648), 'a(83))), ReFree(CodeExtent(4/CallSiteScope { fn_id: NodeId(4), body_id: NodeId(36) }), BrAnon(0)))

The first ReFree(FreeRegion) corresponds to the named region (that of x). The FreeRegion consists of a DefId and a BoundRegion. The function arguments, though bound in the function body, are considered to be free variables in the declaration, hence the name FreeRegion. These two fields of FreeRegion are of interest to us. The BrNamed region refers to the named region x. The second ReFree corresponds to the anonymous region y, and that's why BrAnon.

Now that we understand the components of the error, let’s move on to the next part which is to write code that prints

^ consider changing the type of `y` to `&'a i32`

Firstly, we need the parameter corresponding to the anonymous region, `y`, and secondly, the new type of y, `&'a i32`.

This function

find_arg_with_anonymous_region(&self, anon_region: Region<'tcx>, named_region: Region<'tcx>) -> Option<(&hir::Arg, ty::Ty<'tcx>)>

does both the above tasks for us. It returns the hir::Arg for y, and the type &'a i32, which is the type of y but with the anonymous region replaced with 'a. We are looking for the arguments in the Function Declaration and given the arguments we can get the one with the anonymous region. This is how an Argument is defined.

Arg { pat: pat(8: y), id: NodeId(7) }

Getting hold of the arguments itself wasn't as easy as it seemed. Our initial thought was to extract them from the FnDecl. We realised that wouldn't work, as the regions we were looking for are part of the hir::Body.

This is how the Fn Body is represented in code.

pub struct Body {
    pub arguments: HirVec<Arg>,
    pub value: Expr
}

The code snippet below extracts the Body from the DefId of the free region (free_region.scope) and then iterates over the arguments of the Body.

ty::ReFree(ref free_region) => {
    // free_region.scope is of type DefId, which identifies a particular definition
    let id = free_region.scope;
    let node_id = self.tcx.hir.as_local_node_id(id).unwrap();
    let body_id = self.tcx.hir.maybe_body_owned_by(node_id).unwrap();
    let body = self.tcx.hir.body(body_id);
    body.arguments
        .iter()
        .filter_map(|arg| if let Some(tables) = self.in_progress_tables {
            // Look up the type of the argument.
            let ty = tables.borrow().node_id_to_type(arg.id);

The variable tables stores the Ty<'tcx> corresponding to HIR node ids.

let new_arg_ty = self.tcx.fold_regions(&ty, &mut false, |r, _|
    if *r == *anon_region {
        found_anon_region = true;
        named_region
    } else {
        r
    });

A TypeFolder walks over a type and lets you change it as it goes. TypeFolder has a method fold_regions, which we override to look for the anonymous region and return the Ty with the named region instead, i.e. new_arg_ty.

This is how my mentor Niko Matsakis explained walking over a Type to me. Vec<Vec<&i32>> when walked gives

  • Vec<Vec<&i32>> — regions() is vec![]
  • Vec<&i32> — regions() is vec![]
  • &R i32 — regions() is vec![R]
  • i32 — regions() is vec![]

Once we have the anonymous argument and the new type, we are ready to go. All we need to do is print the error message.

struct_span_err!(self.tcx.sess, span, E0611, "explicit lifetime required in parameter type")
    .span_label(var.pat.span,
                format!("consider changing type to `{}`", new_ty))
    .span_label(span, format!("lifetime `{}` required", named))
    .emit();

The post comes to an end but not the introduction to E0611. There’ll be more of the journey in follow-up posts. Till then, GoodBye!

Gauri P Kholkar | Stories by GeekyTwoShoes11 on Medium | 2017-06-21 16:36:20

Understanding Lifetimes

Let’s start with a simple example, a fn foo which gives us max(x,y) as the return value.

fn foo<'a>(x: &'a i32, y: &i32) -> &'a i32 {
    if x > y { x } else { y }
}

Let’s compile the code. Error!!!

error[E0312]: lifetime of reference outlives lifetime of borrowed content...
--> <anon>:3:27
|
3 | if x > y { x } else { y }
| ^
|
note: ...the reference is valid for the lifetime 'a as defined on the body at 2:43...
--> <anon>:2:44
|
2 | fn foo<'a>(x: &'a i32, y: &i32) -> &'a i32 {
| ____________________________________________^ starting here...
3 | | if x > y { x } else { y }
4 | | }
| |_^ ...ending here
note: ...but the borrowed content is only valid for the anonymous lifetime #1 defined on the body at 2:43
--> <anon>:2:44
|
2 | fn foo<'a>(x: &'a i32, y: &i32) -> &'a i32 {
| ____________________________________________^ starting here...
3 | | if x > y { x } else { y }
4 | | }
| |_^ ...ending here

error: aborting due to previous error

All the error message says is that you should write y: &'a i32 in the function declaration instead. Here’s the correct syntax: fn foo<'a>(x: &'a i32, y: &'a i32) -> &'a i32. Now, consider the following error message snippet.

fn foo<'a>(x: &'a i32, y: &i32) -> &'a i32 {
                       - consider changing the type of `y` to `&'a i32`
if x > y { x } else { y }
                      ^ this reference must have lifetime 'a
}

Doesn’t the above seem to be a simple yet precise error message? My project for my Outreachy internship at Mozilla is an extension of what I just mentioned above, and I am being mentored by Nicholas Matsakis, a researcher at Mozilla. We will be working on improving the error messages related to common lifetime errors in Rust, in an effort to make the lifetime error recovery experience a faster, smoother and more reassuring one. Moving away from generic error messages, these messages intend to tell the user what the problem is and how to fix it.

How does lifetime come into play here?

Consider the following code

fn main() {
    let a = 1;
    let max;
    {
        let b = 2;
        max = foo(&a, &b);
    } // scope of b ends here
} // scope of max ends here ('a)

This gives an error. Let's understand why. Here b goes out of scope before max does, and that's how Rust prevents a case of the classic dangling pointer problem. What y: &'a i32 says is that the lifetime of y must be at least equal to the lifetime of the return value max, which is 'a. Without the 'a parameter, the compiler assumes y to have an anonymous lifetime and cannot figure out whether that lifetime is larger than 'a. Hence, it throws the lifetime of reference outlives lifetime of borrowed content error. These very features of Ownership and Lifetimes are why Rust guarantees memory safety, and they are very important to understand.
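
For what it's worth, here is a sketch of the fixed declaration in context; with both arguments sharing 'a, the compiler of this era instead reports the real problem at the call site, namely that b does not live long enough once max is used outside b's scope:

fn foo<'a>(x: &'a i32, y: &'a i32) -> &'a i32 {
    if x > y { x } else { y }
}

fn main() {
    let a = 1;
    let max;
    {
        let b = 2;
        max = foo(&a, &b); // error: `b` does not live long enough
    }
    println!("{}", max);
}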

Sounds fun, right? I’m pretty excited at the prospect of getting to dig deeper into the Rust compiler and understanding the concept of Lifetimes better! Also, this is my first technical blog and suggestions are welcome :). Thanks for reading!!!

Gauri P Kholkar | Stories by GeekyTwoShoes11 on Medium | 2017-06-20 19:50:54

The purpose of this blog post is to compare and contrast Flexbox and CSS Grid. These are a few points for comparison:

  • How to conceptualize and plan layout
  • Ideal use case scenarios
  • Ease of use

Caveat: My approach to web development is fairly quick and dirty, having come from more startup-type experiences (as Tech Writer and FE Dev contractor) than not. As a result, my biases are towards learning the minimal concepts necessary to get about 80% of typical use cases done.

How to Conceptualize/Plan Layout with Flexbox

When planning out one responsive layout design in both Flexbox and CSS Grid, I felt like the main differences for thinking about Flexbox were the ideas of:

  • content container
  • how data should flow through a content container

These are some sketches used to visualize how the grid content would flow across breakpoints.

Wireframe sketch for flexbox responsive design (mobile)

Wireframe sketch for flexbox responsive design (tablet)

Wireframe sketch for flexbox responsive design (desktop)

Scanning the planning sketches, you can hopefully see that all that was necessary was to (a small CSS sketch follows the list):

  • understand the number of columns per view (also the relative proportion of space needed by each column)
  • understand how content should lay out within the column
  • understand how the repeating pattern of data should display
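
For instance, a minimal sketch of that flexbox thinking (class names are made up, not from my demo):

.card-list {
  display: flex;
  flex-wrap: wrap;   /* let repeating items flow onto new rows */
}
.card {
  flex: 1 1 300px;   /* grow | shrink | base width: column count adapts per breakpoint */
}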

How to Conceptualize/Plan Layout with CSS Grid

For CSS Grid, I felt like it was more important to sketch out a precise grid, as though the webpage were like a piece of graph paper and I had to lay out the skeleton that would support the content.

Wireframe sketch for responsive CSS grid design (tablet)

Note: To be technically correct for a 2 column grid with margins (far left and far right), this should be a 4 column diagram. The far left and far right columns would actually be empty spacers. The resulting vertical line count would be 1-2-3-4-5.

TODO: redraw the grids with proper columns

 

Ideal Use Case Scenarios

My belief is that flexbox is a better choice when the required grid is relatively simple and when the data to be displayed is dynamic (for example, coming from a database and the total number of records returned from a query is not always constant).

On the other hand, if the required grid is relatively complex and the desired design layout needs to be pixel perfect, CSS Grid allows for a more granular level of control. Also if the data to be displayed is static, CSS Grid could be an option.
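
A correspondingly minimal grid sketch (again illustrative, with the spacer columns mentioned above):

.page {
  display: grid;
  /* empty spacer columns far left and far right, two content columns */
  grid-template-columns: 1fr 3fr 3fr 1fr;
  grid-gap: 16px;
}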

Ease of Use

Generally my preferred tool would be flexbox because the same CSS would easily scale to a dynamic size of data.

In contrast, I felt with CSS grid, there would be a bit of calculation and scripting required to make the CSS able to support a dynamic size of data. See CSS in the sample source (about line 60 and later).

What do you think? If it seems like CSS grid is more versatile, please explain your use case in the comments.

Examples

Flexbox example: Demo | Source

CSS Grid example: Demo | Source

 

Resources

https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout

https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout/Box_Alignment_in_CSS_Grid_Layout

https://gridbyexample.com/ (not used for this article but a good one)

For Flexbox resources, please see a previous article.


Carol Chung | CarroTech | 2017-06-19 23:19:32

Test Automation with Python

The first week of my internship was spent primarily setting up my dev environment on the laptop I received.

I had also never used a Mac before, so there were some basics to learn as well.

That said, since macOS is Unix-based, all of the important things were pretty much the same.

(I’m worthless with a Windows command line)

Once I had everything configured, I started fiddling with python and pytest.

I’ve done some development in python, like a scrabble-esque game, but I’d never written tests in python before.

axe-core

In order to automate regression testing for accessibility, we need an API of some sort.

I did some research on the available web accessibility APIs before settling on the axe-core API.

The axe-core API was created by Deque, a company that specializes in accessibility.

Deque offers assessment services, certifications, and more.

axe-core is written in JavaScript and distributed as an npm package.

Eventually, I will create a more seamless integration of axe-core into python, by writing a python library.

DequeLabs did this with Java in their tool axe-selenium-java.

To get started, though, I will be using Selenium’s execute_script() function to handle the JavaScript directly.

Testing with aXe

The aXe API tests a website against a configurable set of rules.

These rules are based on WCAG 2.0, Section 508, and best practices endorsed by Deque.

The test loads an instance of Firefox, injects the axe-core script, and then injects a custom script that runs the API.

I didn’t have a way to directly pass data from the JavaScript back to python (this will be addressed by the axe_selenium_python package I’m writing).

For the purpose of this test, I used JavaScript to write the JSON results to an element in the DOM.

I then grabbed the contents of that element using Selenium in my python test.
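
A rough sketch of that inject-and-read-back flow (assuming local copies of axe.min.js and the script.js shown below; a real test would poll for the result element instead of sleeping):

import json
import time

from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://example.com')

# inject axe-core, then the custom script that runs it and writes
# the JSON results into a div with id 'axe-result'
for name in ('axe.min.js', 'script.js'):
    with open(name) as f:
        driver.execute_script(f.read())

# crude wait for axe.run() to finish
time.sleep(2)
results = json.loads(driver.find_element_by_id('axe-result').text)
print(len(results['violations']))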

For my first iteration of the test, I simply asserted that there were no violations with an impact of critical.

test_critical_violations.py


import json

import pytest


class TestCriticalViolations:

    @pytest.mark.nondestructive
    def test_accessibility_critical_violations(self):
        # List to hold info about critical violations
        criticalViolations = []
        data = json.load(open('./result.json'))
        # Iterate through violations
        for item in data['violations']:
            # Find all critical violations
            if item['impact'] == 'critical':
                # Add description to list
                criticalViolations.append(item['help'])

        # Assert that no critical violations are found
        assert len(criticalViolations) == 0, 'Critical Failures found'

script.js


// Get axe-core rules
// Default setting is all rules
axe.getRules();
// Run axe
axe.run()
// On success, process results
.then(function(result){
  console.log(result);
  // Get element
  var window = document.getElementsByTagName('html');
  // Create new div element
  var node = document.createElement('DIV');
  // Populate div with results text
  var textNode = document.createTextNode(JSON.stringify(result));
  node.appendChild(textNode);
  // Add selector to element
  node.setAttribute('id', 'axe-result');
  // And append to element
  window[0].appendChild(node);
});

This first version of my test used the python packages pytest and selenium.

I modified the test to use the pytest-selenium package, written by Dave Hunt of the Firefox Test Engineering team.

This package adds some additional functionality, and makes testing with selenium a little easier.

I added another package, pytest-html (also written by Dave Hunt), to generate an HTML report of the pytest results.

I also implemented the tests with tox, so that I could run the tests in both Python 2.7 and Python 3 simultaneously.

I then wrote a test for each rule used by axe-core, to get more meaningful output from the test suite.

Instead of seeing that critical violations were found, I am now able to see each test that passed and each test that failed.

test_accessibility.py


. . .

@pytest.mark.nondestructive
def test_accesskeys(self):
    """Ensures every accesskey attribute value is unique."""
    assert test_results.get('accesskeys') is None, test_results['accesskeys'].help

@pytest.mark.nondestructive
def test_area_alt(self):
    """Ensures 
 elements of image maps have alternate text."""
    assert test_results.get('area-alt') is None, test_results['area-alt'].help

@pytest.mark.nondestructive
def test_aria_allowed_attr(self):
    """Ensures ARIA attributes are allowed for an element's role."""
    assert test_results.get('aria-allowed-attr') is None, test_results['aria-allowed-attr'].help

. . .

Now that I understand how to use pytest and Selenium, the next step will be learning how to write Python packages.

Kimberly Pennington | Kimberly the Geek | 2017-06-19 17:30:55

In my work, I once had to deal with a piece of ye olde code with a complicated orchestration of callbacks. Driven by ambition to refactor the thing into promises, I was faced with the fact that, as it turns out, I don’t really understand promises!

By googling I found this excellent article which I highly recommend. In this post, I’d like to summarize it, consolidate my knowledge and maybe add some explanation (it took my brain quite a period of diffuse mode to understand the article).

When building a chain of promises with then(), it’s important to keep an eye on two things: the order in which the functions will be executed and the values they will get as arguments.

An important thing to note is that when a higher-order function gets a lambda like this: foo(bar), the bar lambda has to be called inside the higher-order foo in order to be executed. Obviously, it starts after the higher-order function has started. But if you pass the lambda like this: foo(bar()), the lambda is called beforehand, so it's actually not a lambda anymore, and you're essentially just passing in whatever bar returns. Attempts to call bar later in foo will result in everyone's favorite TypeError: bar is not a function.

Another thing to remember is that then wants only functions. According to the spec, if passed anything other than a function, it ignores it, so you essentially get a piece of code that doesn't quite participate in your chain. Code like .then(bar()) (note that there's no lambda here!) calls bar, but it doesn't affect the chain in any way, and so the lambda of the next then gets called at the same time, and gets the result of the then before the ill-fated .then(bar()) (puzzle 3 in the article).

If you declare your then lambda on the spot and don’t make it take any arguments (.then(function() {...})), obviously it will not get the result of the previous promise (see puzzles 1 and 2 in the article). If it should process the result of the previous promise, it should either take the argument (.then(function(result) {...})) or be passed in as a named function that does take in some arguments (function bar(x){…} and .then(bar)).

And in terms of the order of execution (apart from the case when you make your lambda a non-lambda by calling it while — essentially before — passing it in, as I described above), it's very simple: if the promise resolution procedure takes place, the then lambdas are called in sequence. If it doesn't, the next then's lambda will be called simultaneously with it. And in order for the promise resolution procedure to take place, your lambda must return something (or throw an error), as in puzzles 1 and 4 in the article.
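
To make that concrete, here is a tiny illustration (bar is a made-up named function):

function bar(x) { return 'bar got ' + x; }

Promise.resolve('start')
  .then(bar)          // bar is called with 'start' once the promise resolves
  .then(console.log); // logs 'bar got start'

Promise.resolve('start')
  .then(bar())        // bar runs immediately; its return value is not a
                      // function, so then() ignores it entirely
  .then(console.log); // logs 'start': the value passes straight through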

Irene Storozhko | Stories by Irene on Medium | 2017-06-19 04:13:44

It’s been 4 weeks now since we started working on Lightbeam 2.0, and we have an MVP!

Here are my earlier posts on Outreachy and Lightbeam:

The first milestone in this internship is this initial MVP. Every time Bianca and I went astray from this milestone, our mentor Jonathan guided us and helped us focus back on it.

Here is the summary of what’s been achieved so far:

Set up the basic web extension


The Backend

  • Set up the communication with the browser and capture the first & third party websites
  • Create the capture & viz objects and the store API
    • capture object stores into the store object
    • viz object draws to a canvas
    • The lightbeam page code reads from the store, draws with the viz to a canvas in index.html

The Frontend

  • UI
    • In this initial MVP, the goal was to replicate the old UI. CSS Grid layout is used in the new codebase
  • Visualisations
    • While visualisations will be a major part in the post-MVP, in this initial MVP we are drawing small circles using the Canvas API to show the first and the third party websites

Overall, the whole team is very happy with the work and the progress done so far.

Presenting, Lightbeam 2.0

(Screenshot of the Lightbeam 2.0 MVP, 2017-06-18)


Princiya Marina Sequeira | P's Blog | 2017-06-18 16:50:05

The last two weeks involved exploring nftables and working on tasks related to it.

Task 1
I was assigned a bug related to the list ruleset stateless option. Remember, in the previous post I had mentioned list ruleset. nftables supports listing stateless information through the -s option.
$ nft add table ip firewall
$ nft add set firewall host { type ipv4_addr\; flags timeout\; }
$ nft add element firewall host { 10.0.0.2 timeout 10m }
$ nft list ruleset -s
table ip firewall {
        set host {
                type ipv4_addr
                flags timeout
                elements = { 10.0.0.2 timeout 10m expires 9m50s }
        }
}

But expires is stateful information and should not be displayed with the -s option. The fix is adding an additional check for the stateless variable before printing expires; this is the patch for it.

Task 2
Output toggles numeric, stateless, ip2name and handle were declared as global variables. I had to pass these toggles as members of the structure output_ctx, since an nftables library will be created soon and these variables should not be declared as global variables anymore.
struct output_ctx {
        unsigned int numeric;
        unsigned int stateless;
        unsigned int ip2name;
        unsigned int handle;
};
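
To illustrate the refactoring pattern, here is a standalone sketch (not actual nftables code; the helper name is made up):

#include <stdio.h>

struct output_ctx {
        unsigned int numeric;
        unsigned int stateless;
        unsigned int ip2name;
        unsigned int handle;
};

/* before, print helpers consulted a global `stateless` flag;
 * after, the caller threads the context through explicitly */
static void print_expiry(unsigned int seconds, const struct output_ctx *octx)
{
        if (!octx->stateless)
                printf(" expires %us", seconds);
}

int main(void)
{
        struct output_ctx octx = { .stateless = 1 };
        print_expiry(590, &octx); /* prints nothing: stateless listing */
        octx.stateless = 0;
        print_expiry(590, &octx); /* prints " expires 590s" */
        return 0;
}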

These variables refer to different listing options. Each of these options has specific functionality. To understand them, let us create a ruleset and apply these options to it.
$ nft add table ip foo
$ nft add chain ip foo bar { type filter hook output priority 0 \; }
$ nft add rule ip foo bar tcp dport https counter
$ nft add rule foo bar ip daddr 8.8.8.8 drop
$ nft list ruleset
table ip foo {
        chain bar {
                type filter hook output priority 0; policy accept;
                tcp dport https counter packets 50 bytes 11551
                ip daddr 8.8.8.8 drop
        }
}

The numeric option is denoted by ‘n’. When specified once, it shows network addresses numerically (this is the default). When specified twice, it also displays port numbers numerically. When specified three times, it shows protocols, user IDs, and group IDs numerically.
$ nft list ruleset -nn
table ip foo {
        chain bar {
                type filter hook output priority 0; policy accept;
                tcp dport 443 counter packets 2203 bytes 265040
                ip daddr 8.8.8.8 drop
        }
}

The stateless option omits stateful information.
$ nft list ruleset -s
table ip foo {
        chain bar {
                type filter hook output priority 0; policy accept;
                tcp dport https counter
                ip daddr 8.8.8.8 drop
        }
}

The handle option is denoted by 'a' and outputs rule handles.
$ nft list ruleset -a
table ip foo {
        chain bar {
                type filter hook output priority 0; policy accept;
                tcp dport https counter packets 2491 bytes 334457 # handle 2
                ip daddr 8.8.8.8 drop # handle 4
        }
}

The 'N' option translates IP addresses to domain names.
$ nft list ruleset -N
table ip foo {
        chain bar {
                type filter hook output priority 0; policy accept;
                tcp dport https counter packets 2621 bytes 380140
                ip daddr google-public-dns-a.google.com drop
        }
}

Let's get back to the structure output_ctx. It needs to be passed such that it is available to all the functions where these variables are used, and there are many such functions. nft_run() is an important function and is called by the main() function. It calls the functions nft_parse() and nft_netlink(), which perform the actions required by the input command and display the result.
int nft_run(void *scanner, struct parser_state *state, struct list_head *msgs, struct output_ctx *octx);
static int nft_netlink(struct parser_state *state, struct list_head *msgs, struct output_ctx *octx);
extern int nft_parse(void *, struct parser_state *state);

Using grep made the task of finding the places where these variables are used easier. I had to trace back from the functions where these variables were used to nft_parse() and pass the structure wherever required. This is the patch for task 2.


Varsha Rao | Varsha's Blog | 2017-06-17 20:23:01

This is a blog post about interfaces in Go. I wanted to write about a headscratcher that cost me several hours of work when I first started learning Go, and I figured I might as well start from the beginning and write the article on interfaces that I wish I had read back then. The story of my encounter with nil interfaces is coming soon, but for now, here’s a brief and hopefully accessible piece on interfaces in Go.1 So, without further ado, I give you…

What Is an Interface?

Coming from the dynamically-typed wild west of Python, one of the bits of Go that took the most getting used to was the idea of interfaces. An interface is a way of typing things according to their methods. If I want a function that can take any number of different types, so long as they have a given method (or two, or five) in common, I’ll want to use an interface to accomplish this (since I can’t pass in any old thing because of Go’s type safety rules). To give a concrete example, say I’ve got these classes:

type octopus struct {
    numTentacles int
}

func (octopus) ooze() string {
    return "ink"
}

type slug struct {
    salted bool
}

func (slug) ooze() string {
    return "slime"
}

slug and octopus are their own types, but both have ooze() methods. If I wanted a function to make use of the ooze method, and didn’t know how to make effective use of interfaces, I might write something like this. Note that interface{} is a wild card and I’ll explain why in a minute… but for now, just accept that this is the way we can allow this function to take either a slug OR an octopus (…or anything else, unfortunately) without Go complaining at us.

func oozeAttack(slugOrOctopus interface{}) string {
    switch oozingThing := slugOrOctopus.(type) {
    case slug:
        // in this case, oozingThing has concrete type slug
        return fmt.Sprintf("You got %s’d!", oozingThing.ooze())
    case octopus:
        // in this case, oozingThing has concrete type octopus
        return fmt.Sprintf("You got %s’d!", oozingThing.ooze())
    default:
        panic(`This thing doesn't know how to ooze!
            ...It sucks that you were able to pass this in
            without the compiler complaining at you, but
            here we are.`)
    }
}

Ugh. Awkward, right? And it has repeated code, and it can potentially panic b/c we have no guarantees of the type of the thing we passed, and… ugh. No good. But luckily, I can use interfaces as they were meant to be used, and suddenly my code is a lot prettier:

type oozer interface {
    // the signature of a function called "ooze",
    // which takes no args and returns a string
    ooze() string
}

func oozeAttack(o oozer) string {
    return fmt.Sprintf("You got %s’d!", o.ooze())
}

If an object has all of the methods required for an interface, we say that that object implements (or satisfies) that interface. In this case, both octopus and slug implement oozer because they both have ooze() methods. The compiler can check this for us, so we know that anything we pass into oozeAttack has an ooze() method and won’t break our code—in stark contrast to the example above, where we could pass in literally anything and just had to pray that it wouldn’t cause a panic.
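
As a quick usage sketch (assuming the definitions above live in a package main):

package main

import "fmt"

// octopus, slug, oozer, and oozeAttack as defined earlier in the post

func main() {
    fmt.Println(oozeAttack(octopus{numTentacles: 8})) // You got ink’d!
    fmt.Println(oozeAttack(slug{salted: false}))      // You got slime’d!
}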

Okay, But What Is an interface{}?

If you’ve been using Go for more than a couple of days, you’ve probably stumbled across interface{}, the mythical and mysterious empty interface (click here for dramatic effect). (I even used it in the example above.) The empty interface baffled me for a long time. I understood that practically, it was a type wildcard—you used it anywhere you weren’t sure of the type of a thing. If I have a function that’s going to get passed some thing but I don’t know what the type of that thing is, I’ll use interface{} so nothing breaks:

func printMysteryObject(thing interface{}) {
        fmt.Printf("Your mystery thing is: %v", thing)
}

But it was only after I started thinking about what interfaces actually are, and reading some blog posts, that I figured out why this works. interface{} is this:

type BoringInterface interface {
        // … nothing to see here …
}

It’s an interface that requires no methods! And so any object at all will satisfy this interface, because any object in Go has 0+ methods. I finally understand what the flip this thing is. So exciting.

Stay tuned for part 2 in this series, “When Interfaces Go Nil (dun dun dunnn)”.


  1. I need to make the disclaimer that lots of other folks have written about this, and the Go blogpost on The Laws of Reflection probably explains this stuff better than I do. That said, I hope this blog post is more to the point, and perhaps more entertaining. (Mad props to Travis McDemus for inspiration for this excellent example of how interfaces work, which I find 100% more accessible than the io.Reader/Writer examples that get used in all the canonical Go blogposts about interfaces.)

Maia Remez McCormick | Maia McCormick | 2017-06-17 13:04:02

This is the CCS811 Air Quality Sensor.

It can sense a wide range of Total Volatile Organic Compounds. This means that it can measure the level of, for example, CO2 (carbon dioxide) in the room, giving us information about the indoor air quality. This is the block diagram from the datasheet:

As mentioned in a previous post, my job is to write a driver for this sensor, using the IIO interface. The sensor comes on a breakout board, as you can see in the pictures. This allows us to use it as an I2C device, but I still need a way to connect it to my system.

For this reason, NEO comes to the rescue:

Neo

Just kidding, this NEO, UDOO NEO:

UDOO NEO

The UDOO NEO is an open hardware computer that embeds 2 cores (1GHz ARM® Cortex-A9 and ARM Cortex-M4) on the same processor (NXP i.MX 6SoloX). You can see the full specs here: https://www.udoo.org/udoo-neo/.

As additional storage for the UDOO NEO, I received a microSD card that holds the bootloader, the Linux kernel and a file system.

Putting all the parts together:

hardware

Now, regarding the software, the first step is to clone the kernel:

git clone https://www.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git

The next step is to compile it. The UDOO NEO has a different architecture than my machine. When the host (the computer on which the compiler runs: my machine) has a different architecture than the target (the computer on which we want the program to run: the UDOO), we need to use a cross-compiler. As suggested on the UDOO website, the following packages are required:

sudo apt-get install gawk wget git diffstat unzip texinfo gcc-multilib \
     build-essential chrpath socat libsdl1.2-dev xterm picocom ncurses-dev lzop \
     gcc-arm-linux-gnueabihf

Then, after choosing the right configuration, we compile the sources:

ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- make zImage -j5

The ARCH variable specifies that we want to compile for an ARM architecture, and CROSS_COMPILE is the prefix appended to the names of the tools used to generate the executables. The “-jX” parameter specifies the number of parallel jobs used.

After the compilation is ready, we run:

ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- make dtbs -j5

in order to compile the device trees, and then:

ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- make modules -j5

for the modules. Now, we can copy the kernel to the SD card.
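
Copying typically looks something like the sketch below (illustrative only: device paths, mount points, and the exact .dtb file name depend on your board and setup):

# with the SD card's boot and rootfs partitions mounted:
cp arch/arm/boot/zImage /mnt/boot/
cp arch/arm/boot/dts/<your-board>.dtb /mnt/boot/
ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- INSTALL_MOD_PATH=/mnt/rootfs make modules_install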

This is it! The setup is done!

Narcisa Vasile | Narcisa Vasile | 2017-06-17 00:00:00

Multi Language S3 Testing - Go TestSuite

About the Project

The project aims at implementing a set of tests to exercise RGW’s S3 interface. RGW features an AWS S3-like interface against which tests are to be run. Currently there exists a python suite of tests implemented here. This test suite uses an SDK from Amazon called Boto and has some 400 tests in it. The goal for this project is to have other test suites, in other languages specifically (C++, Java and Go), that all test the same subset of the 400 python tests.

The major deliverables of this project are to implement three test suites in:

  • Golang
  • Java
  • C++

If time allows, then the tests shall also be connected to the Ceph integration framework called Teuthology.

Project Execution

I have set up a project tracker and Trello board for different people to track progress on the project. The goal is to ensure the timeline is followed in order to meet the deliverables. The tracker has a summary of my daily tasks in the daily log and a summary of the weekly tasks in the week logs.

Project repositories

I started working on the Golang test suite and the source code lives here:

go_s3tests

Achievements during this Time

Set Up

Getting my environment to work conveniently took a bit of time. At first I was running the RADOS gateway on my laptop in a virtual machine and connecting my tests to it. I then realised that after a few runs my machine would just hang. I therefore asked my mentor if I could instead set up Ceph on another machine and connect the tests on my laptop to the RADOS gateway on that host.

This worked fine with guidance from my mentors.

Connected Golang test Suite to RGW

After setting up, I developed a skeleton of the project, and before I could write any tests I had to do some hackery to connect to RGW (the RADOS gateway). Fortunately my mentor had a working Go client that connects to RGW, which I gladly reverse engineered to meet the project needs, and I was able to hook my tests up to RGW.

Written 25 running tests so far

After connecting to RGW's S3 interface, I started working on the tests. My mentor asked me to implement eight tests annotated with explanations in comments, to see if I understood their goal. I did this and got feedback on the same.

After this task, I then started implementing the tests on bucket and object operations. I have written 25 tests to date and am still getting feedback as I write more tests.

The feedback was about:

  • Ensuring that my code files end with a newline.
  • Ensuring that my config is consumable by teuthology.
  • Ensuring that the instructions for running the tests actually work.

Blockers

I do not have blockers on my end; however, I am facing a challenge where my mentor is not able to run the tests in his environment. We will have a chat this coming week to rectify this once he gives more detail.

What I have learned

Amazon Web Services

The major capacity-building highlight in these first two weeks has been being able to use the AWS Go SDK to perform S3 object storage operations. I have written logic for most operations during implementation of the test suite. The community supporting this SDK is good, and this makes life simple. I faced one major challenge adding AWS if-none-match headers on UploadInput, which had not been implemented, but there was a workaround for that.

Working with Go Configuration

I have also learned a lot about handling configs with Go. I got to know how to use viper for loading config data. It features simple ways of reading the data, as we use the dot notation without needing to write extra structs. Also, the fact that it supports all config types really stood out for me, as the config type is not tied to the implementation; a user can decide to use any config type of their choice.

I decided to use a .toml config. Implementing this was this simple:

config.toml

[DEFAULT]

host = "s3.amazonaws.com"
port = "8080"
is_secure = "yes"

[fixtures]

bucket_prefix = "jnans"

[s3main]

access_key = "0555b35654ad1656d804"
access_secret = "h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q=="
bucket = "bucket1"
region = "mexico"
endpoint = "http://localhost:8000/"
display_name = ""
email = "someone@gmail.com"

Loading the configuration

func LoadConfig() error {
	viper.SetConfigName("config")
	viper.AddConfigPath("../")

	err := viper.ReadInConfig()
	if err != nil {
		fmt.Println("Config file not found...")
	}

	return err
}

var err = LoadConfig()

var creds = credentials.NewStaticCredentials(viper.GetString("s3main.access_key"), viper.GetString("s3main.access_secret"), "")

var cfg = aws.NewConfig().WithRegion(viper.GetString("s3main.region")).
	WithEndpoint(viper.GetString("s3main.endpoint")).
	WithDisableSSL(true).
	WithLogLevel(3).
	WithS3ForcePathStyle(true).
	WithCredentials(creds)
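
With that config in hand, a test in this suite might look roughly like the sketch below (the test name and bucket value are made up; cfg is the *aws.Config built above from the .toml values):

package s3tests

import (
	"testing"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func TestBucketCreateDelete(t *testing.T) {
	svc := s3.New(session.Must(session.NewSession(cfg)))
	bucket := aws.String("jnans-bucket1")

	if _, err := svc.CreateBucket(&s3.CreateBucketInput{Bucket: bucket}); err != nil {
		t.Fatalf("create bucket: %v", err)
	}
	if _, err := svc.DeleteBucket(&s3.DeleteBucketInput{Bucket: bucket}); err != nil {
		t.Fatalf("delete bucket: %v", err)
	}
}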

Golang

I have been writing Golang since September 2016 but still confess I am learning new ways of doing things in the language. There have been a few grumbles at the absence of features I valued so much in other languages, like default parameters, but I have found ways around it. This remains on my wish list for the language.

Tasks for the Next Period.

Work on Open issues

There is one issue open on the project so far, and I will discuss it with the author this week so that I can close it.

Ensure I have 60% test implementation

The next two weeks will be focused on ensuring that a good number of the tests implemented in the python test suite are implemented in Go according to plan before July.

Integrating to teuthology

I will also work on integrating the test suite with teuthology before I move on to the next test suite.

Joannah Nanjekye | Joannah Nanjekye | 2017-06-17 00:00:00

My first weeks as an Outreachy intern at Cadasta have been incredible. As agreed with my mentors Chandra and Kate, my starting task was to enhance the user experience by adding a language option to the user account. The result of this work is available in this PR on GitHub. As you can see, Cadasta uses Python with Django. To achieve the goal, on my first day I made a detailed task list, which you can see in the PR description section. Fortunately, since I had already contributed during the application process, the setup of the environment and the Git flow were already known to me. This meant I could quickly start on the tasks. In the next paragraphs I'm going to highlight challenges and moments of pride for each chunk of work:

1. Handling a migration

Challenges: It was the first time I had worked with migrations and I didn't have a clear idea of what they required. Following some online tutorials, I just learned how to run commands such as python manage.py makemigrations and python manage.py migrate. At first I believed this was all. Then I encountered a problem and an unclear error appeared -- I was unable to figure it out on my own. After asking for help, I was told the error was probably linked to the version of Python (2.X instead of 3.X). After fixing this problem, later on I also discovered I had to add, commit and push a migration file to GitHub in order to add the model change to the PR -- another piece of information I didn't know, and I needed help from someone else in order to figure it out.

Moments of pride: I didn’t have one, to be honest. But in retrospect I acknowledge the importance of asking for help with detailed information and at the right time. I was unblocked very quickly with the help of other people.

2. Activating translation of language stored in the user model:

Challenges: One of the biggest challenges I faced was to find detailed information in (and understand) the Django documentation on topics such as internationalization and writing custom middleware. Answers to some specific questions on how to activate the language stored in the user.language field were not available (or very hard to find) on StackOverflow and other websites. Further, the Django page on custom middleware was missing key information, such as the fact that a middleware should return None if it does not return a response.
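
The core of such a middleware can be sketched like this (assuming an old-style Django middleware and a language field on the user model; the details differ in the real PR):

from django.utils import translation

class UserLanguageMiddleware(object):
    def process_request(self, request):
        user = getattr(request, 'user', None)
        if user is not None and user.is_authenticated() and user.language:
            # activate the user's preferred language for this request
            translation.activate(user.language)
            request.LANGUAGE_CODE = translation.get_language()
        # implicitly returns None, so Django keeps processing the request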

Moments of pride: I was delighted when I saw it working at the end: in this tough process I learned a lot, and for the first time I built a middleware and a test using the Python unittest Mock module.

3. Adding a feature to the API

Challenges: Again, the major blocker in this task was the lack of detailed examples in the Cadasta API documentation. One of the reviewers of my PR required this addition by just mentioning it, and unfortunately my mentors were not able to guide me and quickly address my questions on how to start. For an entire day I was blocked and could not work out where to find detailed information. However, on the second day somebody from the dev team helped me a lot and I could finally figure everything out.

Moments of pride: Firstly, I used my experience to improve the existing documentation: I wrote a mock-up with curl command examples for each section and shared this document with the people in charge of the documentation upgrade -- my initiative was highly appreciated! Secondly, I was proud of myself since I learned a lot: although I needed the initial help to start, I then figured out everything on my own by reading existing code and pages on the Django Rest Framework.

Summary

My start at Cadasta was incredible since I felt real ownership of my work. Facing challenges such as learning new frameworks or understanding the documentation has ultimately been a positive experience -- it makes me feel more and more like a developer. I guess this is the magic of coding at the end of the day:

There’ll be always challenges, therefore there’ll be always gratifications and chances to learn.

-- my own words :)

Cadasta GitHub repository

My first PR

My koalacoder website

My blog

Laura Barluzzi | My coding blog | 2017-06-16 00:00:00

KDE Akademy 2017

Yes, I fear I have let my blog go a bit defunct. I have been very busy with a bit of a life re-invented after separation from my 18-year marriage. But all is now well in the land of Scarlett Gately Clark. I have now settled into my new life in beautiful Payson, AZ. I landed my dream job with Blue Systems, and recently moved to team Neon, where I will be back at what I am good at: Debian-style packaging! I will also be working on Plasma Mobile! Exciting times. I will be attending Akademy, though out of my own pocket, as I was unable to procure funding. (I did not ask KDE e.V. due to my failure to assist with KDE CI.) I don’t know what happened with CI; I turned around and it was all done. At least it got done, thanks Ben. I do plan to assist in the future with CI tickets and the like, as soon as the documentation is done! Harald and I will be hosting a Snappy BoF at Akademy; hope to see you there!

If you find any of my work useful, please consider a donation or become a patron! I have 500USD a month in student loans that are killing me. I also need funding for sprints and Akademy. Thank you for any assistance you can provide!
Patreon for Scarlett Clark (me)

Scarlett Clark | Home of Scarlett Gately Clark | 2017-06-15 20:16:07

Week 5/15.

This week I completed documenting section 4, which covered the concepts DOM, XPath, HTML and JavaScript. One of my mentors suggested that we need to include how a user can generate the DOM, include Embedded Metadata.js, and then write other fields on top of it. While updating the translator for Oxford Reference, Philipp suggested the same to me. So I will be learning how to use existing translators when creating new ones. With this I am looking forward to finishing the common code blocks documentation by scraping Wikimedia in parallel and presenting it as an example in this section. In a later section, I will document how to bring all these pieces together and build a complete translator in Scaffold.


Sonali Gupta | It's About Writing | 2017-06-15 18:35:11

The past week has gone by so fast for me – each day I have sat and learned a bit more about CRON jobs, the Echo mechanism and so on.

My little API needs to activate a job that will be triggered somewhere in the future. Sounds simple, right?
But writing it in a codebase that is new to me, in a technology that is new to me, turns out to be trickier.

So I sat down every day, made steady progress, and got a first draft that I am proud of.

I am trying to use a native function of the job object, but I cannot tell whether it is correct usage, because there is no other use of it in the whole project(s).

So I will wait for our weekly meeting to hear from my mentors whether this is a more accurate solution than the one I did last week.

Wish me luck!


Ela Opper | FoxyBrown | 2017-06-15 12:22:22

This is a tale of a dream come true. This story starts way back in 2016. I was working as a software developer at one of the tech firms in Uganda at the time. I was not happy and had decided that I would rather go back to school to pursue my Aeronautical Engineering career. I had always had a craving to contribute to open source but did not know where to start.

I kept reading people’s GitHub profiles in admiration, especially of people that had repositories in the category of “Repositories they contribute to”. I wanted this on my profile too but did not know how. I did not have many experienced developers in my circles, at least, so this kind of information did not come easily.

One day I gave a talk at a meetup dubbed Geek Night Kampala on the transition from continuous integration to delivery, where I met some ladies who requested that I teach them a little more on the topic, a request I welcomed. After getting to know these ladies, one of them told me about Rails Girls Summer of Code and suggested that I team up with her and apply. Well, we did and got accepted. This was the beginning of a dream coming true. I quit my job just before summer began, in preparation for the golden opportunity.

First: Rails Girls Summer of Code

My open source journey began here. I was accepted with a friend of mine to work on a project called Qutebrowser. We were called team Echo, from Uganda. Qutebrowser is a vim-like browser based on PyQt5. If you are a vim fanatic then you probably want to try it. The beauty of this browser is that you browse the web using commands. Anyone? I wanted to contribute to this project because I wanted something challenging to work on, and a browser wasn't bad, as it had a lot of new things I wanted to learn.

RGSoC was a huge success and a very memorable experience for me, as I opened my first pull request then. The mentor was very welcoming and answered every question we asked him. I learned more working on this project than from the experience I had at work. By and large, I got to know that debugging is a very important skill when working on such big code bases, got to learn a lot of Python as I had been working with .NET, and got to meet good people in the RGSoC community.

Through RGSoC I was also able to get a diversity grant to attend and speak at PyConZA, an opportunity that I have cherished to date because the conference opened my mind to many ideas. I gave a talk on how to contribute to open source.

Now: Ceph Outreachy Intern working on multilanguage RGW Testing

After RGSoC, which ended in September 2016, I applied for Outreachy in the December 2016 round with Mozilla. I did not get through, but one secret is that you always learn a lot through the application process. I did not get in, but I learned a new programming language, Golang, that I am using on my current project. To be honest I felt frustrated at not being accepted, but I cannot underestimate the skills I got in the whole process. I also noted the mistakes I had made: my proposal was not as good as it could have been.

I then decided to keep my open source fire alive because I had really longed to be an active contributor. I reached out to PyPy mentors in December 2016 to guide me on contributing. I continued contributing to PyPy until Outreachy called for applications again for the March round. I will still open PRs to PyPy.

I applied again for the May Outreachy round and made sure I did not repeat my past mistakes; however, I think luck also plays a role in these kinds of things, because the organisation you never expect actually accepts you. This round I applied to three companies. May 4th has been my most memorable day this year so far, because on this day I signed a book contract with a publisher I have always wanted to work with and was accepted to Outreachy. Double happiness, right?

About Ceph’s Multilanguage RGW Testing project

I applied to Ceph because of how interesting the project was. I anticipated I would build a lot of capacity from working with Amazon Web Services and working in three languages (Go, Java and C++) on three different test suites. My ambitious mind really wanted this.

The project entails writing tests for AWS using the Amazon Go, Java and C++ SDKs. RGW features an AWS S3-like interface against which tests are to be run. Ceph already had a python test suite that was testing RGW, and my work is to extend the tests to the other three languages.

I am very willing, along with my mentor, to guide anyone who wants to contribute to these test suites as well. There are many tests to implement, so feel free to reach out if interested.

The Future: A luta continua

My future in open source presents limitless opportunities. There are many places you can be, but you will never be everywhere. I will consistently continue to contribute to the projects I have started, especially with Ceph. I have a passion for PyPy and believe it has a whole future ahead of it, because it is solving a common problem. I will therefore keep sending patches to PyPy and Qutebrowser, most likely after Outreachy, because I have a lot to do now.

Lessons I have learned

Take every opportunity to invest in others

I heard about my first open source opportunity when I offered to teach ladies I had met while speaking somewhere. The moral of this is that when you invest in others, you are also indirectly opening up opportunities for yourself. If I had not met these ladies, I probably wouldn't have known about RGSoC and in effect would not have begun contributing to open source.

Contribute with a goal of bettering yourself and the project

I know people contribute to open source for many reasons. Some want to get hired, some want to be famous, and others want to make money from summer programs. These are all logical reasons; however, it is better to contribute to better yourself career-wise. Open source gives us a platform to showcase our skills as we become better. It has the right kind of mentors to guide you in areas where you are not sure, as you work on projects that impact many people.

The other thing you should aim at is bettering the projects you are working on. Many of the tools we use are open source, and the danger is that they only get better if people like us voluntarily send patches to them. I personally use open source tools for everything from my operating system to my development tools, you name it. If the tools we use do not get better, then we will not be productive. Rubygems.org got hacked and was down for over a week, and during this time hundreds of Ruby developers remembered to contribute. Let us not wait for such terrifying times, because even then we won’t be of much use, as such emergencies need people who are familiar with the project.

Strive to learn from every failure you face

I have failed at many things, so much so that I very much identify with failure. We all fail many times. The difference between two people who fail is the ability to learn from their mistakes and not repeat them. I failed to get into Outreachy with one of the organisations in the round I was accepted, but the application process inspired me to write a book on Python compatibility. Failure is not bad; it depends on how you decide to view it. Also, after failing the first time I applied, I endeavoured to improve my proposal for the next round. Failure is therefore a learning opportunity.

Keep trying

People always succeed after trying and failing a couple of times. If you have never failed, then you have probably not tried enough, and if you stop trying, you will never live to land on that success you envision. Some open source tasks require patience as you communicate and interact with people, and sometimes the frustration builds up and makes you want to quit.

Keep calm and keep Koding

If I said the journey was smooth, it would be the lie of the century. Some reviews will frustrate you, the code will jam at times, you will burn the midnight candle, and you will meet mean people. Always remember to keep calm and continue hacking. May you have the calmness to ignore the things that don’t matter, the focus for the things that do, and the ability to know the difference!

Good News: You never walk the Journey alone

Whatever you want to achieve will cost you a little patience and commitment. There are people who have walked this path, and many communities exist with people very willing to guide you on the open source journey. I have been privileged to know about the Open Source Help Community, a community full of individuals who want to help others start contributing. You can reach anyone there for help and join the monthly chats with experts who have walked the open source path.

Maintainers are always happy to guide new contributors. There may be those exaggerated scenarios where mentors are not friendly, but these are few. You just need to find a project you want to contribute to and reach out to the maintainers to see what you can work on. The best project to work on is one you already use, because then you can easily suggest improvements to it as a user.

See you at the top!!!!

Joannah Nanjekye | Joannah Nanjekye | 2017-06-15 00:00:00

The First Day

May 30th, 2017 was the first day of my Outreachy internship with OpenAustralia Foundation. I started by having a video chat meeting with my mentors Luke and Henare, which we mostly spent getting to know each other, then I spent the following day prepping for the official kick-off meeting on day three.

The Project

Local Councillors Project (suggestions for a cooler name are welcome) is the project I will be working on for the next three months. This project is closely related to PlanningAlerts, an app that allows people to sign up for email notifications when there are new development applications in the area of their concern. Through PlanningAlerts, users can also communicate with local councillors about these development proposals. Information about the local councillors is currently imported from a public spreadsheet that is maintained by dedicated volunteers. The issue with the current system is that in order for PlanningAlerts to import this local councillor data, it first needs to be converted from CSV to JSON, which requires someone to run a rake task from the terminal. This is not an easy task for volunteers who may not have programming knowledge, and is a lot to ask on top of their work gathering local councillor information that is scattered all over the internet. That’s where the Local Councillors Project comes in. We want to create a system that makes it easier and more accessible for people to contribute data, so that apps like PlanningAlerts can access the data of local councillors.

In the kick-off meeting we mapped out issues, solutions, and design principles. This was a new process for me, and a very interesting and meaningful experience to take part in. Here is what we came up with:

The Problem

  1. PlanningAlerts needs data about local councillors so people can write to them concerning development proposals.
  2. PlanningAlerts covers 150 local councils.
  3. The councillors data changes periodically and irregularly and it needs to be updated manually by someone.
  4. The OpenAustralia Foundation team does not have the capacity to keep it up to date themselves.
  5. The current system requires a lot of programming knowledge to contribute the data. People don’t even know they can update the data.
  6. Very few people are able to make contributions, and so very few people do.

We know we will have solved the problem when...

PlanningAlerts has up-to-date councillor information for every authority it covers. This information is updated by the contributions of volunteers. We have an accessible, easy (and fun) system to update/add councillor information that acknowledges and celebrates the work of the volunteers.

Design Principles

  • Strive for diversifying data: invite people who are historically marginalized and excluded from conversations around technology and information, and intentionally build data structures that reflect those voices and lived experiences. The effort of achieving diversity needs to happen from multiple angles.
  • Make sure contributors understand the amazing impact they’re having.
  • Strive for universal accessibility.
  • Make the process of contribution obvious and intuitive.
  • Communicate clearly how people’s contributions are used.
  • Be welcoming for both new and existing contributors. Do adequate outreach as well as decreasing the barriers for new contributors.
  • Be supportive: people need to feel supported and encouraged in their process of contributing.
  • Share ownership: make sure people know they are a part of the community. Honour their labour and make sure they are able to receive the benefits.
  • Respect everyone’s time, including administrators.

How we work

  • Be agile, flexible, and responsive: decide on a small feature, implement, debug, repeat.
  • Communicate changes to people as they come up.
  • In case of conflict, return to the shared goal of the project and the problems we want to solve.
  • Reflect on what your beef is and where it’s coming from (maybe it’s not about the project).

What I’ve Learned Through the Process

I come from a background in community organizing, art-based education and facilitation, which makes becoming a developer a big leap for me. It is certainly very different, but through this process I could find some similarities between these different types of work. So I am writing this section to reflect, learn and digest in my own way. Two things I found difficult and valuable to learn were 1) how to break down a big issue into a set of small issues that deal with one thing at a time, and 2) how to categorize what goes where.

Breaking Down Complex Issues

When we were nailing down the problem, one thing I found unfamiliar, challenging, and refreshing was compartmentalizing a big problem into a set of individual issues.

In the past, the way I worked as a facilitator was closer to creating an unordered list as opposed to an ordered list. As a facilitator I was trained to focus on the interconnection of issues and the dynamics among them rather than considering single issues in isolation. I can see both approaches are relevant in different contexts, but it took me a while to wrap my head around the shift, and it still takes time for me to break things down into single issues.

Categorizing the Planning Process

This is related to breaking down a big problem into a set of small single issues. When we were creating the design principles for this project, I was constantly conflating design principles and solutions. Design principles should act as a bridge between a problem and a solution, so they are connected yet not the same thing. Again this challenge was rooted in breaking down the complicated process of planning a project that aims to solve a complex problem.

The “How We Work” section was originally part of my draft for the project’s design principles. My mentors’ feedback was that these points wouldn’t traditionally be categorized as design principles, since they are more about how we run the project and less about guiding design decisions within the project. However, we all agreed that it was important to document these ideas somewhere, which led to the introduction of the “How We Work” section.

Both cases were great practice with applying computational thinking in real life scenarios. It helped me connect what I already know to this big new tech project.

Design Principles as a Collective Agreement

Before the kick-off meeting, Luke gave me a whole bunch of readings about design principles. Since I chose this project in the internship application, I was familiar with the issue we want to solve and had an abstract understanding of the solution (i.e. to build an app). But I never really knew what “design principles” meant in this context; I had assumed it was probably about graphic design and the aesthetics of the project, but it turned out to be about design in a much bigger sense: how to communicate with users, and how to reach the goal of the project.

I’ve come to understand design principles as something similar to the idea of a collective agreement, which I’m familiar with through my background in workshop facilitation. A collective agreement is something that all workshop participants and facilitators create together at the very beginning of a workshop. It is a set of agreements designed to maximize the participants’ learning in the space and collectively move towards their goals. Some questions to ask when developing a collective agreement are:

  • What makes you feel safe(r)?
  • What makes you feel respected and heard?
  • What makes you feel encouraged to explore your vulnerability and take risks?
  • and more

And some of the actual agreements to respond to those questions are:

  • Speak on your behalf (use “I” statements).
  • It is ok to feel uncomfortable.
  • One diva, one mic (one person speaks at one time with no interruptions).
  • Respect people’s gender pronouns.
  • Apologize and do better if you make mistakes.
  • and more

Learning is oftentimes rocky, and there are moments we feel lost, sidetracked, misunderstood, and divided. But we need those moments to actually learn. A collective agreement is something to come back to in those moments, to help people remember their common ground and build from there each time.

Power Imbalance

Some of the issues we want to address in this project include:

  • The current system requires a lot of programming knowledge to contribute the data. People don’t even know they can update the data.
  • Very few people are able to make contributions, and so very few people do.

Those points indicate the barriers to participation in civic tech and to civic participation in city planning. To challenge that, we are trying to make civic participation easier and more accessible. So we need to center users by asking about their experiences, where they are coming from and what their needs are.

This is similar to what a facilitator should do in a group session: strive to be instrumental in centering participants’ needs to feel safe to express their ideas, feelings, and concerns, and to explore those together.

As much as a facilitator should be instrumental, it is important to remember that they are part of the group, and everybody shares the process. It is convenient to believe facilitators are purely instrumental. However, a facilitator holds unique power and has to try hard to minimize the hierarchy of power that exists between them and participants. Pretending it doesn’t exist gives facilitators power they don’t have to be accountable for. Similarly, when it comes to the data that drives PlanningAlerts, people with programming knowledge hold the power. I want this project to reduce the division that power creates between people with programming knowledge and those without.

Why does it matter how we build, not only what we build?

Since we are building things to make civic participation more accessible, it is important to acknowledge that we are also participants in civic tech, and that how we treat each other matters. We worship efficiency so much that we can end up building something great through a terrible process: one with a lack of transparency, bad communication, or disregard for the safety and privacy of users and workers. Can we still create something great if our process is terrible? Probably we can. But if what we want to achieve through this project, or through the civic tech movement overall, is to empower all people through technology, it defeats the purpose. Especially when our design principles say:

  • Strive for diversifying data: invite people who are historically marginalized and excluded from conversations around technology and information, and intentionally build data structures that reflect those voices and lived experiences.

How we build matters as much as what we build, because the process of building is already a part of diversifying data.

In Conclusion

I’m so proud of our design principles! They have also been a pretty useful guide for us. My next post, which I should be writing already, will be about diagramming our user flow and solution design. Anyhow, that’s it for now!

Hisayo Horie | blog@hisayo | 2017-06-14 04:00:00

Lately I’ve been reading flexbox docs. There are a lot of really good articles already out there (linked below). I’ll share what I’ve learned using a few practical examples with links to the source and a few observations about the process.

Sample 1 (simulated simple bootstrap grid)

This is not my first time working with flexbox but when I initially compared it to the bootstrap grid, I preferred bootstrap because I felt like I had a little more layout control.

One area where flexbox is much better than bootstrap is in footprint size. There is a lot of code in bootstrap that goes unused and defining a responsive grid using flexbox does not require very much code.
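For instance, a simple responsive grid boils down to a few rules (the class names here are just for illustration, not from the pens):

.row {
  display: flex;
  flex-wrap: wrap;            /* allow columns to wrap on narrow screens */
}

.col {
  flex: 1 1 0;                /* columns grow and shrink equally from a zero basis */
}

@media (max-width: 600px) {
  .col { flex-basis: 100%; }  /* collapse into stacked rows in mobile view */
}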

*Please click the upper left corner to get to the pen content and then the Edit on CODEPEN link (upper right) will also be available.

Note: this grid collapses in mobile device view; there are also some observations in the content header.

Sample 2 (example of vertical layout)

This example specifically looks at when you would prefer to use

flex-direction: column 

One scenario could be if you have a bunch of dynamic content with similar widths but varying heights (photos or maybe work portfolio samples). Typically a grid system with this kind of content would force an alignment with a bunch of irregular white space between elements.

Tip: When you want the content to lay out with an even bottom edge (desktop/tablet views), you need to make sure that the total height of the elements in each column equals the flex container height. Otherwise a jagged bottom edge could result (which I guess could be an option).
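In rule form, that column layout is roughly this (a sketch; pick whatever container height your design needs):

.portfolio {
  display: flex;
  flex-direction: column;     /* fill top to bottom first */
  flex-wrap: wrap;            /* start a new column when the height is used up */
  height: 900px;              /* item heights in each column should total this */
}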

Note: This grid collapses in mobile device view.


Sample 3 (simulation of a marketing homepage)

To test the ease of use of flexbox, this is a simulation of a consumer hardware company’s homepage. They do great marketing design. This simulation does not go into the nitty gritty details (like navbar responsive design) but emulates their body/footer responsive design. It did not feel too labor intensive and the code footprint is still very light.

Note: the breakpoints are between mobile:tablet, and between tablet:desktop

https://codepen.io/cch5ng/full/gRwYPe/

Final tip: My initial observation when googling flexbox is that it is potentially a very complex topic to learn. Rather than starting with a site that contains a ton of reference info about each property, I like the link below by webdesignerwall because of their visual style of teaching and simplicity.

Afterward, please search further for more advanced tutorials on flexbox because this is only the tip of the iceberg (but hopefully helpful).

Resources



Carol Chung | CarroTech | 2017-06-13 22:24:19

Allocating a free PID is essentially looking for the first bit in the bitmap whose value is 0; this bit is then set to 1. Conversely, freeing a PID can be implemented by ‘toggling’ the corresponding bit from 1 to 0.

A PID is generally visible in multiple namespaces. The kernel has to iterate through the different namespaces and “free” the PID individually from all the namespaces. Also, during this operation, we would want a lock on the pidmap so as to ensure atomic updates.

    spin_lock_irqsave(&pidmap_lock, flags);

spin_lock_irq does two things: it takes the lock (protecting against activity on other cores) and it turns off interrupts (protecting against activity in interrupt handlers on the same core). Somewhere later the lock will be released and interrupts will be restored. But it could be that interrupts were already turned off at the point of the spin_lock call; in that case, when we release the lock, we don’t want to turn interrupts back on. spin_lock_irqsave protects against this by storing the incoming interrupt status in the flags argument. The corresponding unlock function checks the flags argument to decide whether to turn interrupts back on or leave them off.
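The usual pattern, sketched:

unsigned long flags;

spin_lock_irqsave(&pidmap_lock, flags);      /* save IRQ state, disable interrupts, take lock */
/* ... update the pidmap atomically ... */
spin_unlock_irqrestore(&pidmap_lock, flags); /* release lock, restore the saved IRQ state */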

for (i = 0; i <= pid->level; i++) {
    struct upid *upid = pid->numbers + i;
    struct pid_namespace *ns = upid->ns;

    hlist_del_rcu(&upid->pid_chain);
    switch (--ns->nr_hashed) {
    case 2:
    case 1:
        wake_up_process(ns->child_reaper);
        break;
    case PIDNS_HASH_ADDING:
        /* Handle a fork failure of the first process */
        WARN_ON(ns->child_reaper);
        ns->nr_hashed = 0;
        /* fall through */
    case 0:
        schedule_work(&ns->proc_work);
        break;
    }
}

In the above code snippet, we iterate through all the PID levels, and at each level we delete the pid hash entry from the RCU hash list. RCU (Read-Copy-Update) is a synchronization mechanism that allows reads to occur concurrently with updates. This hash entry was added in the alloc_pid() function using the following:

hlist_add_head_rcu(&upid->pid_chain,
        &pid_hash[pid_hashfn(upid->nr, upid->ns)]);

If the namespace has only one or two PIDs left, we want to clean up the namespace. For every namespace we have a field, nr_hashed, which stores the number of PIDs that have been hashed to that particular namespace. If the number of PIDs hashed to the namespace is one or two, we wake up the reaper (the namespace’s child_reaper, i.e. its init process), which takes care of reaping the exiting processes.

We wake up the reaper even when the number of hashed pids is 2, because one of the processes that may be exiting is init itself. It’s possible to get into a situation where a pid namespace’s reaper is <defunct>, re-parented to host pid 1, but never reaped. The wake_up call to the reaper will be spurious if init is not exiting; that is a waste of time but not a correctness issue.

If the number of PIDs hashed is 0, or nr_hashed still has the same value as when it was initialized (PIDNS_HASH_ADDING), we schedule the cleanup of the namespace, which, after a few function calls, does a kern_unmount().

for (i = 0; i <= pid->level; i++)
    free_pidmap(pid->numbers + i);

The next step is to call free_pidmap(), which simply clears the bit pertaining to that pid using the clear_bit() function. The code inside free_pidmap() is listed below.

struct pidmap *map = upid->ns->pidmap + upid->nr / BITS_PER_PAGE;
int offset = upid->nr & BITS_PER_PAGE_MASK;

clear_bit(offset, map->page);

The function free_pid() finally calls call_rcu(&pid->rcu, delayed_put_pid);

The use of call_rcu() permits the caller of free_pid() to immediately regain control, without needing to worry further about the old version of the newly updated element. In other words, call_rcu() is used to ensure that any readers that might still hold references to the old pid structure complete before the pid is freed.
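For reference, the callback itself is tiny; it simply drops the reference once the grace period has elapsed:

static void delayed_put_pid(struct rcu_head *rhp)
{
    struct pid *pid = container_of(rhp, struct pid, rcu);
    put_pid(pid); /* frees the struct pid once its reference count drops to zero */
}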

Gargi Sharma | Stories by Gargi Sharma on Medium | 2017-06-13 13:19:06

The Mozilla Learning Network rallies and connects leaders who want to advance the promise of the Internet for learning in a networked world. This is done by fuelling new approaches to digital learning through Mozilla Hives, Clubs and Gigabit cities.

As part of the Internet Health Basics learning activity, there is a module on Privacy and Security, which includes a chapter on tracking cookies using Lightbeam.

The objective of this course is to:

  • Understand and explain cookies and how they track users’ Web browsing.
  • Understand and use online tools such as Lightbeam to monitor online tracking.
  • Reflect on distinguishing between appropriate and invasive uses of tracking.

My Outreachy mentor on Lightbeam, Luke Crouch, conducts web literacy classes at the Tulsa Library. Recently, he conducted a session on Lightbeam, and below is a summary of the answers from the questionnaire filled out by the attendees.

Q: Have you ever used a tool like Lightbeam before?

  • Turned off the cookies button
  • OpenDNS which shows domains contacted

Q: If you had to summarise what Lightbeam does, what would you say?

  • Adds to the awareness of how little privacy we have
  • Informs users of tracking activity and who is tracking
  • Easy to view a centralised map of all cookie connections encountered during a browsing session
  • Visual display of Internet locations contacted
  • Lists your cookies in a graph
  • Visualisation tool for showing links between cookies and sites

Q: Was there anything that you found especially valuable? 

  • Graph was valuable along with link identification by colour.
  • Visual of the volume of the connection activity was interesting
  • Good to see logos
  • An eye opener

Q: Was there anything that you found especially confusing?

  • I would prefer the ability to set the graph size
  • Coloured lines were a bit confusing

Q: Would you use Lightbeam again?

  • Yes
  • No
  • Maybe

Q: What habits can you develop – and what strategies and tools can you use – to prevent yourself from being followed online?

  • Use private browsing with tracking protection
  • Delete cookies
  • Clear the history
  • Incognito mode

Q: How might you learn more about online privacy? What would you search for?

  • Wouldn’t try to learn online because I don’t trust a lot of what’s out there. I learned from reading the book ‘I know who you are and know what you did’
  • Search for privacy and legislative actions. Browse computer science and technology online reports and magazines.
  • Search for online privacy best practices
  • Go to the Tulsa library class 🙂

The above Q & A sheds light on how users find Lightbeam to be an eye opener in the web privacy space. The survey is also useful for us in improving the current UI/UX issues.

The existing Firefox add-on will be revamped using WebExtensions during the course of my Outreachy internship. If you haven’t tried Lightbeam yet, you can install the Firefox add-on and leave us your feedback.

It’s time to make the Internet healthy 🙂


Princiya Marina Sequeira | P's Blog | 2017-06-13 12:37:21

Life, like surfing, is all about the wave selection and reading waves is a tough skill. If you are looking for the exponential wave to surf for career growth, then read on to unravel the mystery.

Have you ever pursued something difficult that seemed the logical thing to do to step up in your career, and then feeling motivated you went on to tackle the monster all by yourself, only to later find yourself almost crushed? If so, consider enrolling yourself in a mentorship program.

‘There are options in life. It’s not necessary that high achievements can only be garnered by choosing difficult options.’ – Dear Zindagi Movie

Typically, a mentor is an older, more experienced person who works with the mentee, on behalf of the mentee’s best interests and goals. Mentor-mentee relationships can either be formal – organized through a mentoring program or informal – established through connections. Most of the benefits of being mentored are commonly recognized, so here are the big ones as a refresher.

  • Structured learning can save you hours of struggle. The role your skill set will play in your career is pretty straightforward: The better you are at what you do, the more successful you will be.
  • Gain exposure from second-hand experiences. This reminds me of a well said quote by Otto von Bismarck – “Only a fool learns from his own mistakes. The wise man learns from the mistakes of others.”
  • Improve your performance.  We all know how easy it is to get lost in the woods on days of low self-motivation.
  • Build a network and make connections with peers that may otherwise take years to develop.


Remember not to be fooled by the impression that the outcome of your career depends on the actions and inputs of others. The truth is, you are in charge of the chemical reaction, in which the mentor is merely a catalyst.

If you are wondering how to find a mentor, here are a few tips. If you are still in college, a great place to start would be your own college: find a senior who has done what you aspire to do. This senior may still be in college or might be an alumnus you know. You can also explore options on the internet, like LinkedIn, where you can easily find people with the skill set you desire. I found mentorship to strengthen my programming skills at Coding Ninjas. They have some very good courses with awesome mentors. Kudos to Ankush and Kannu Sir for making it worthwhile! If you are looking for a mentorship program to get on board as an upstream developer in OpenStack, worry not, because there are plenty of ways. On the weekend prior to each OpenStack Summit there is a two-day Upstream Institute session, an intensive program designed to share knowledge about the diverse ways of contributing to OpenStack. Also, the Women of OpenStack run a Speed Mentoring session at the OpenStack Summits, which is a great icebreaker to get to know new and experienced people in the OpenStack community. Nonetheless, if you want to start by working remotely, keep an eye on the weekly meetings and the discussions going on in the IRC channels. You may also want to check out the Outreachy OpenStack internships, which help people from groups underrepresented in free and open source software get involved, and which run twice every year. Outreachy has very good projects with great mentors.

But the story of surfing the exponential wave doesn’t end here. Paying it forward at the Speed Mentoring session at the OpenStack Summit in Boston was an unmatchable experience. Another enriching journey which excites me to the core began a few days back: the Outreachy May–Aug 2017 internship round, in which I am a co-mentor for the Keystone Documentation Auditing project. It’s been said that you never really learn something until you teach it. So, take some time to work on and develop yourself after the mentee phase is complete, and once you feel more confident, pull yourself up and make the switch to the mentor phase.

We tend to think about a mentorship program only from the mentee’s perspective, but a little pondering will show that the returns on the mentor’s side are great career boosters as well. A few good pointers worth noting are listed below.

  • Refresh your knowledge. Mentoring someone can provide a greater perspective and clarity about what you already know.
  • Hone your leadership, management and communication skills.
  • Build your credibility and reputation as a role model.

Mentoring does not necessarily mean that you must take out a lot of time from your already hectic schedule. All it means is that you spend quality time with the mentee because every interaction counts.

To wind up I would say, whether you become a mentor or a mentee, it’s a classic win-win situation. Now that the facts are clear, choose a role that brings you a step closer to your goals. Would love to hear your takeaways from the journey in the comments section!


Nisha Yadav | The Girl Next Door | 2017-06-13 12:00:10

The kernel development cycle has evolved so beautifully over time that it has set an example in the open source world. Having contributed to the kernel, I actually enjoyed learning about the whole development cycle. Terms like mainline kernel, rc, stable release and long-term support confused me a lot initially, but with time I understood at least the basic workflow.

Keeping in mind the volume of code that sits inside the kernel, it is very difficult for a single person to inspect each and every part of the project perfectly. Hats off to Linus and people like Greg. To make the process easier (it only looks easy :P), the kernel is broken down into subsystems, with each subsystem having its own main developer or, as generally said, top-level maintainer. These maintainers decide which patches go into the mainline kernel. Along with these top-level maintainers there are many file system and driver maintainers who review patches before the subsystem maintainers accept them. These maintainers can also send a pull request to the subsystem maintainer if the subsystem is further divided into sub-subsystems. That is the reason the whole kernel is built on a chain of trust. Finally, all the patches collected by the subsystem maintainers go to Linus in a pull request. So, to get your patches into the kernel, send them directly to the right maintainers, and this is when the scripts/get_maintainer.pl script comes in handy.
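For example, running it from the root of the kernel tree (the file argument here is just an illustration):

$ ./scripts/get_maintainer.pl -f kernel/pid.c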

Subsystem maintainers collect patches ahead of time and send them to Linus when the merge window opens. To explain more concretely, let’s assume that the development cycle for version 5.1 has ended and it has been released. As soon as 5.1 is released, a new development cycle for version 5.2 starts and the merge window opens. The merge window is the starting point of the development cycle, where code that is considered sufficiently stable gets merged into the mainline kernel by Linus. All the new features for the next kernel version go in during this merge window. This is usually a two-week process: the maintainers send pull requests to Linus and Linus merges them into the mainline kernel. But this is not yet how a new version gets released.

At the end of two weeks Linus declares that the merge window has closed, and the process of stabilizing the mainline kernel begins. At this point some features are still untested and unstable, and some bug fixes have to be made to prepare the kernel for the next stable release. Closing the merge window means that it is time to start releasing the -rc kernels. Over the next 8–9 weeks, the developers work continuously on the -rc versions. Every week a new -rc is released, named 5.2-rc1, 5.2-rc2 and so on. These -rc versions are improvements or bug fixes to the mainline kernel that was built up during the merge window. During these 8–9 weeks only fixes and improvements go into the mainline; no new features are added after the merge window closes. Finally, after the end of these -rc weeks, the kernel is stable and ready for release as version 5.2. The whole development cycle takes around 10–12 weeks, so we get a new version roughly every three months.

The whole cycle goes like : (Sorry for not giving a graphical representation.)

5.1 released → merge window for 5.2 opens for 2 weeks → changes staged for 5.2 go into the mainline kernel → window closes and the unstable 5.2 is ready for fixes → week 1 (5.2-rc1) → week 2 (5.2-rc2) → … → finally, 5.2 is released after around 10–12 weeks.

Note:

The merge window is a very busy time for the developers, and they avoid new non-urgent patches due to lack of time during these two weeks. I have encountered one case myself where a developer told me not to send non-urgent cleanup patches during this time. During the merge window they concentrate only on the material for the upcoming -rc1, and such patches might annoy them. But they are happy to take them as soon as the -rc1 week starts.

Happy Linux coding :)

Bhumika Goyal | Stories by Bhumika Goyal on Medium | 2017-06-13 05:27:09

(Follow up: Get a list of speakers/titles)

This afternoon, women reps from various parts of Mozilla (including HR/Recruitment, Engineering Management, and Culture) held a Q&A session. There was a ton of info and advice, but I will try to give brief highlights:

Don’t take things too personally.

Reflect on successes and failures. Decide what you learned from both.

Get a lot of buy in when making really big decisions with large impact.

Go with your guts about people.

Make decisions that give yourself more options.

One manager remarked that her style was to build great personal relationships and to really try to get to know people on their level.

After the internship, when job hunting and you get an offer:

     * Negotiate your salary.

     * Listen to their offer, thank them, and say you’d like to take 2 days to discuss it with your people. Afterward, make your counter offer. Do not take the initially offered salary.

Seek mentors.

Have confidence in your own abilities. Don’t be scared to ask questions.


Carol Chung | CarroTech | 2017-06-13 04:50:03

The short and sweet version :)

The IDR API is charged with the allocation of integer ID numbers used with device names, POSIX timers, and more.

void idr_preload(gfp_t gfp_mask);
int idr_alloc(struct idr *idp, void *ptr, int start, int end, gfp_t gfp_mask);
void idr_preload_end(void);

The idr_preload() function is charged with allocating the memory necessary to satisfy the next allocation request. A call to idr_alloc() allocates an integer ID. It accepts upper and lower bounds for that ID to accommodate code that can only cope with a given range of numbers — code that uses the ID as an array index, for example. If need be, it will attempt to allocate memory using the given gfp_mask. Allocations will be unnecessary if idr_preload() has been called.

Looking up an existing ID is achieved with:

void *idr_find(struct idr *idp, int id);

The return value will be the pointer associated with the given id, or NULL otherwise.

To deallocate an ID, use:

void idr_remove(struct idr *idp, int id);

With these functions, kernel code can generate ID numbers to use as minor device numbers, inode numbers, or in any other place where small integer IDs are useful, for instance file descriptors.

The longer version:

An application of the IDR system is to associate an integer with a pointer. For example, in the IIC bus, each device has its own address; to find a particular device on the bus, we must first find the device address.

To access a device on the IIC bus, we must know its ID, which the kernel uses to find the structure describing the device. If one were to use an array to store these structures, with the array index corresponding to the integer ID, a lot of memory would be occupied once the IDs reach higher numbers. If instead we were to use a list, finding the structure would not be time efficient. Hence, IDR comes to the rescue!

The internal implementation of IDR uses a radix tree, which makes it convenient to associate an integer with a pointer while keeping search efficiency high.

Two related structures:

struct idr {
    struct idr_layer __rcu *hint; /* pointer to the most recently used idr_layer */
    struct idr_layer __rcu *top;  /* top of the idr_layer tree, i.e. the root */
    struct idr_layer *id_free;    /* head of a free list of idr_layer structs */
    int layers;                   /* number of idr_layer levels in the tree */
    int id_free_cnt;              /* number of idr_layers left in the free list */
    int cur;                      /* current position, for cyclic allocation */
    spinlock_t lock;
};

struct idr_layer {
    int prefix;                       /* the ID prefix of this idr_layer */
    DECLARE_BITMAP(bitmap, IDR_SIZE); /* marks which ary slots are in use */
    /* pointers to the data, or to child idr_layers; size is 1 << 6 = 64 */
    struct idr_layer __rcu *ary[1 << IDR_BITS];
    int count;                        /* number of ary slots in use */
    int layer;                        /* level number within the tree */
    struct rcu_head rcu_head;
};

Initialisation in the IDR API:

The function idr_init_cache() is called inside the start_kernel function. idr_init_cache() allocates a slab cache that is used for the allocation of idr_layer structures.

static struct kmem_cache *idr_layer_cache;

void __init idr_init_cache(void)
{
    idr_layer_cache = kmem_cache_create("idr_layer_cache",
            sizeof(struct idr_layer), 0, SLAB_PANIC, NULL);
}

There are two ways to create a new IDR structure:

  1. Macro definition and initialization of a named IDR:

#define DEFINE_IDR(name) struct idr name = IDR_INIT(name)
#define IDR_INIT(name) \
{ \
    .lock = __SPIN_LOCK_UNLOCKED(name.lock), \
}

  2. Dynamic initialization of an IDR:

void idr_init(struct idr *idp)
{
    memset(idp, 0, sizeof(struct idr));
    spin_lock_init(&idp->lock);
}

Allocation

void idr_preload(gfp_t gfp_mask);
int idr_alloc(struct idr *idp, void *ptr, int start, int end, gfp_t gfp_mask);
void idr_preload_end(void);

idr_preload() is charged with allocating the memory necessary to satisfy the next allocation request. A call to idr_alloc() allocates an integer ID. It accepts upper and lower bounds for that ID to accommodate code that can only cope with a given range of numbers — code that uses the ID as an array index, for example. If need be, it will attempt to allocate memory using the given gfp_mask. Allocations will be unnecessary if idr_preload() has been called. When used inside a preloaded section, the preload’s allocation mask can be assumed.
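Put together, the typical calling pattern looks like this (a sketch; my_idr, my_lock and ptr are placeholders, not kernel symbols):

int id;

idr_preload(GFP_KERNEL);      /* pre-allocate so the locked region cannot fail on memory */
spin_lock(&my_lock);
id = idr_alloc(&my_idr, ptr, 0, 0, GFP_NOWAIT); /* end <= 0 means no upper bound */
spin_unlock(&my_lock);
idr_preload_end();

if (id < 0)
    return id;                /* -ENOMEM or -ENOSPC */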

Integer and the pointer associated with it

IDR allocates integers relatively simply. IDR_BITS = 6, so the first level in the IDR tree (the root) has 64 slots. Let’s assume we have a two-level structure: the top level’s 64 slots each refer to a leaf node in the next level. In the leaf layer, the idr_layer’s ary array elements point to the target objects. Two levels can therefore address 64 * 64 = 4096 objects.

Find a pointer corresponding to an ID

Let us assume a two-level tree. If we want to find the pointer corresponding to the ID 74, then 74/64 = 1, so starting from the top we move to top->ary[1]. Next we need to find the leaf node. The leaf node index is id & IDR_MASK, which is 74 & 63 = 10. So the ary[10] slot in the second level points to the data for this ID.
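In index arithmetic, that walk is just (an illustrative sketch, not kernel code):

int top_index  = id >> IDR_BITS; /* 74 >> 6 = 1, so top->ary[1]   */
int leaf_index = id & IDR_MASK;  /* 74 & 63 = 10, so leaf->ary[10] */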


static inline void *idr_find(struct idr *idr, int id)
{
    /* hint caches the idr_layer touched by the most recent operation,
     * giving a fast path when nearby IDs are looked up repeatedly */
    struct idr_layer *hint = rcu_dereference_raw(idr->hint);
    ...
}

/* In the allocation path, idr_get_empty_slot() finds a free ID in the
 * idr_layer tree, recording the path of idr_layers in the pa array,
 * and idr_fill_slot() then associates the pointer with the ID: */
rv = idr_get_empty_slot(idp, starting_id, pa, 0, idp);
if (rv < 0)
    return rv == -ENOMEM ? -EAGAIN : rv;
idr_fill_slot(idp, ptr, rv, pa);
*id = rv;
return 0;

Replace pointer for an ID

Sometimes, one might want to update the pointer corresponding to an ID. This essentially is looking up the location corresponding to the ID followed by updating the pointer in that location.

void *idr_replace(struct idr *idp, void *ptr, int id)
{
    int n;
    struct idr_layer *p, *old_p;

    if (id < 0)
        return ERR_PTR(-EINVAL);

    /* find the top of the idr tree */
    p = idp->top;
    if (!p)
        return ERR_PTR(-EINVAL);

    /* number of ID bits addressable by a tree of this depth */
    n = (p->layer + 1) * IDR_BITS;
    if (id >= (1 << n))
        return ERR_PTR(-EINVAL);

    /* walk down from the top to the leaf that covers this ID */
    n -= IDR_BITS;
    while ((n > 0) && p) {
        p = p->ary[(id >> n) & IDR_MASK];
        n -= IDR_BITS;
    }

    /* index of the ID within the leaf's ary array */
    n = id & IDR_MASK;
    if (unlikely(p == NULL || !test_bit(n, p->bitmap)))
        return ERR_PTR(-ENOENT);

    /* swap in the new pointer and return the old one */
    old_p = p->ary[n];
    rcu_assign_pointer(p->ary[n], ptr);
    return old_p;
}

Remove assignment of a pointer from an ID

To remove the assignment of a pointer from an ID we need three steps:

  1. Look up the pointer in the tree.
  2. Remove the assignment to that pointer (by traversing the idr_layer array inside sub_remove() and releasing the idr_layer structure).
  3. If there is an opportunity to shrink the tree, remove the entire level.

void idr_remove(struct idr *idp, int id)
{
    struct idr_layer *p;
    struct idr_layer *to_free;

    if (id < 0)
        return;

    /* release this ID's slot along its idr_layer path */
    sub_remove(idp, (idp->layers - 1) * IDR_BITS, id);

    if (idp->top && idp->top->count == 1 &&
        (idp->layers > 1) && idp->top->ary[0]) {
        /*
         * Single child at leftmost slot: we can shrink the tree.
         * This level is not needed anymore since when layers are
         * inserted, they are inserted at the top of the existing
         * tree.
         */
        to_free = idp->top;
        p = idp->top->ary[0];
        rcu_assign_pointer(idp->top, p);
        --idp->layers;
        to_free->count = 0;
        bitmap_clear(to_free->bitmap, 0, IDR_SIZE);
        free_layer(idp, to_free);
    }
}

Gargi Sharma | Stories by Gargi Sharma on Medium | 2017-06-12 17:47:27

In this post, I’ll be explaining the functions used for PID lookup in the Linux kernel. The first function is next_pidmap(), which finds the first set bit in the current pidmap, or in the successor pidmaps, corresponding to a namespace ns. Initially, there is a sanity check on the range to see that we have not exceeded PID_MAX_LIMIT. Inside the for loop, we traverse from the current pidmap to the end of the pidmaps.

int next_pidmap(struct pid_namespace *pid_ns, unsigned int last)
{
    int offset;
    struct pidmap *map, *end;

    if (last >= PID_MAX_LIMIT)
        return -1;

    offset = (last + 1) & BITS_PER_PAGE_MASK;
    map = &pid_ns->pidmap[(last + 1)/BITS_PER_PAGE];
    end = &pid_ns->pidmap[PIDMAP_ENTRIES];
    for (; map < end; map++, offset = 0) {
        if (unlikely(!map->page))
            continue;
        offset = find_next_bit((map)->page, BITS_PER_PAGE, offset);
        if (offset < BITS_PER_PAGE)
            return mk_pid(pid_ns, map, offset);
    }
    return -1;
}

The function find_next_bit() returns the index of the next bit set to 1 in the current pidmap, starting from offset. If the offset is inside the BITS_PER_PAGE limit, meaning there is an entry in the pidmap set to 1, we construct and return that pid. The unlikely() macro provides a hint that lets the compiler adjust its branch prediction.

find_pid_ns() is used to find the pid structure for the pid number nr. We iterate over the hash list whose head is selected by the pid_hashfn() hash function. If found, we return the struct pid related to the nr and ns arguments passed to the function.

struct pid *find_pid_ns(int nr, struct pid_namespace *ns)
{
    struct upid *pnr;

    hlist_for_each_entry_rcu(pnr,
            &pid_hash[pid_hashfn(nr, ns)], pid_chain)
        if (pnr->nr == nr && pnr->ns == ns)
            return container_of(pnr, struct pid,
                    numbers[ns->level]);

    return NULL;
}

find_vpid() finds the pid by its virtual id, i.e. in the current namespace.

struct pid *find_vpid(int nr)
{
    return find_pid_ns(nr, task_active_pid_ns(current));
}

find_get_pid() looks up a PID in the hash table and returns the struct pid with its reference count elevated. The count is incremented inside the get_pid function by calling atomic_inc(&pid->count);

struct pid *find_get_pid(pid_t nr)
{
    struct pid *pid;

    rcu_read_lock();
    pid = get_pid(find_vpid(nr));
    rcu_read_unlock();

    return pid;
}

find_ge_pid() returns the first allocated pid greater than or equal to nr. It uses the two functions find_pid_ns and next_pidmap described above to do this. find_ge_pid() is used by fs/proc/base.c to find the pid greater than or equal to the nr passed as the argument. If there is a pid at nr, this function is exactly the same as find_pid_ns.

struct pid *find_ge_pid(int nr, struct pid_namespace *ns)
{
    struct pid *pid;

    do {
        pid = find_pid_ns(nr, ns);
        if (pid)
            break;
        nr = next_pidmap(ns, nr);
    } while (nr > 0);

    return pid;
}

Gargi Sharma | Stories by Gargi Sharma on Medium | 2017-06-12 16:57:00

Process ID

Every process has a unique identifier it is represented by, called the process ID (pid). The first process that the kernel runs is called the idle process and has pid 0. The first process that runs after booting is called the init process and has pid 1.

The default maximum value of a pid on Linux is 32768. This can be checked by running the command:

$ cat /proc/sys/kernel/pid_max

This default value ensures compatibility with older systems which used 16-bit types for process IDs. One can increase the maximum pid value by writing the number to /proc/sys/kernel/pid_max, gaining a larger pid space at the expense of reduced compatibility. On 64-bit systems, pid_max can be set to any value up to 2²² (PID_MAX_LIMIT, approximately 4 million).
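For example, to raise the limit (as root):

$ echo 4194304 > /proc/sys/kernel/pid_max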

Process ID Namespaces

PID namespaces isolate the process ID number space, meaning that processes in different PID namespaces can have the same PID. PID namespaces allow containers to provide functionality such as suspending/resuming the set of processes in the container and migrating the container to a new host while the processes inside the container keep the same PIDs. PID namespaces are hierarchically nested in parent-child relationships. Within a PID namespace, it is possible to see all other processes in the same namespace, as well as all processes that are members of descendant namespaces. Processes in a child PID namespace cannot see processes that exist (only) in the parent PID namespace (or further removed ancestor namespaces). A process will have a different PID in each of the layers of the PID namespace hierarchy, starting from the PID namespace in which it resides through to the root PID namespace. Calls to getpid() always report the PID associated with the namespace in which the process resides.

Process ID Structure

struct upid {
    int nr;
    struct pid_namespace *ns; /* the namespace this value is visible in */
    struct hlist_node pid_chain;
};

struct pid {
    atomic_t count;
    unsigned int level;
    /* lists of tasks that use this pid */
    struct hlist_head tasks[PIDTYPE_MAX];
    struct rcu_head rcu;
    struct upid numbers[1];
};

This structure contains the ID value, the list of tasks having this ID, the reference counter and the hashed list node to be stored in the hash table for a faster search.

How are the process IDs allocated?

The two main functions called inside alloc_pid() are kmem_cache_alloc(…) and alloc_pidmap(…).

...
pid = kmem_cache_alloc(ns->pid_cachep, GFP_KERNEL);
...

The alloc_pid function calls kmem_cache_alloc, with the cache for the namespace as the parameter. kmem_cache_alloc keeps a cache of pre-allocated structures. Since pid allocation happens frequently, instead of allocating a pid struct from main memory (kmalloc) each time, we keep a cache, and when a new pid struct is required, kmem_cache_alloc returns the address of a block that was already allocated.

struct pid *alloc_pid(struct pid_namespace *ns)
{
    ...
    for (i = ns->level; i >= 0; i--) {
        nr = alloc_pidmap(tmp);
        if (nr < 0)
            goto out_free;
        pid->numbers[i].nr = nr;
        pid->numbers[i].ns = tmp;
        tmp = tmp->parent;
    }
    ...
}

A process will have one PID in each of the layers of the PID namespace hierarchy starting from the PID namespace in which it resides through to the root PID namespace. Hence, once alloc_pid gets the pid structure, we are required to assign the process pid in all namespaces. We iterate through all namespaces and call alloc_pidmap with the namespace as the parameter.

struct upid is used to get the id of the struct pid, as it is seen in particular namespace. Each upid instance is put on the PID hash list.

struct pid *alloc_pid(struct pid_namespace *ns)
{
    ...
    for ( ; upid >= pid->numbers; --upid) {
        hlist_add_head_rcu(&upid->pid_chain,
                &pid_hash[pid_hashfn(upid->nr, upid->ns)]);
        upid->ns->nr_hashed++;
    }
    ...
}

The alloc_pid function, which is tasked with allocating pids, makes a call to alloc_pidmap, which searches for the next free pid in the bitmap.

static int alloc_pidmap(struct pid_namespace *pid_ns)
{
    int i, offset, max_scan, pid, last = pid_ns->last_pid;
    struct pidmap *map;

    pid = last + 1;
    if (pid >= pid_max)
        pid = RESERVED_PIDS;
    ...
}

last is the last pid that was allocated in this namespace, and we assign the next pid serially. PIDs are allocated in the range (RESERVED_PIDS, PID_MAX_DEFAULT).

if (unlikely(!map->page)) {
    void *page = kzalloc(PAGE_SIZE, GFP_KERNEL);
    /*
     * Free the page if someone raced with us
     * installing it:
     */
    spin_lock_irq(&pidmap_lock);
    if (!map->page) {
        map->page = page;
        page = NULL;
    }
    spin_unlock_irq(&pidmap_lock);
    kfree(page);
    if (unlikely(!map->page))
        return -ENOMEM;
}

If no page has been allocated for the bitmap yet, allocate one using kzalloc. If the bitmap page is still missing afterwards (checked with the unlikely() hint), return a No Memory Available (-ENOMEM) error.

if (likely(atomic_read(&map->nr_free))) {
    for ( ; ; ) {
        if (!test_and_set_bit(offset, map->page)) {
            atomic_dec(&map->nr_free);
            set_last_pid(pid_ns, last, pid);
            return pid;
        }
        offset = find_next_offset(map, offset);
        if (offset >= BITS_PER_PAGE)
            break;
        pid = mk_pid(pid_ns, map, offset);
        if (pid >= pid_max)
            break;
    }
}

Here, we keep iterating through the bitmap until we find a free pid, the offset exceeds the page size, or the pid reaches the pid_max limit.

static int alloc_pidmap(struct pid_namespace *pid_ns)
{
    ...
    if (map < &pid_ns->pidmap[(pid_max-1)/BITS_PER_PAGE]) {
        ++map;
        offset = 0;
    } else {
        map = &pid_ns->pidmap[0];
        offset = RESERVED_PIDS;
        if (unlikely(last == offset))
            break;
    }
    pid = mk_pid(pid_ns, map, offset);
    ...
}

As seen in the previous section, when the offset exceeds the page size, we move on to the next bitmap with offset 0. Otherwise, if the pid has reached the maximum default limit, pid assignment wraps around to RESERVED_PIDS. Finally, we compute the pid value from the bitmap position using the mk_pid function.

Gargi Sharma | Stories by Gargi Sharma on Medium | 2017-06-12 16:12:48

The DIY Life officially marks my third portfolio project in the Flatiron School’s Learn Verified Program. I feel like I’m moving at a snail’s pace, but I am learning. The DIY Life was exciting (and challenging) to create because I absolutely love anything and everything Do-It-Yourself. In the program, we’re always encouraged to create something that we genuinely like, and now I understand why. I had this big vision of what I wanted the site to look like and what functionality a user would have. Then came making that vision come to pass within guidelines that included building nested forms, nested resources, scope methods, user authentication, and more.

Rails Assessment Requirements:

  • Use the Ruby on Rails framework.
  • Your models must include a has_many, a belongs_to, and a has_many :through relationship.
  • Models should include reasonable validations for the simple attributes.
  • Include at least one class level ActiveRecord scope method.
  • Include a nested form that writes to an associated model through a custom attribute writer.
  • Your application must provide a standard user authentication, including signup, login, logout, and passwords.
  • Your authentication system should allow login from some other service.
  • You must make use of a nested resource with the appropriate RESTful URLs.
  • Your forms should correctly display validation errors.
  • Your application must be, within reason, a DRY (Do-Not-Repeat-Yourself) rails app.

Learn.co prepared me for this. There’s a plethora of lessons, code-alongs, and videos covering these very topics. I just didn’t know where to start or how I would create this vision I had in my head. So, prior to jumping in, I got on draw.io and began to think about what tables I wanted and the associations (this still trips me up) they should have.

Afterwards, I set up Omniauth and Devise for users and views. I made use of the DotEnv gem to load environment variables from .env. Devise assists with Omniauth set-up, so I knew it’d be a breeze. NOT! Things got nasty and bugs ran rampant. I wanted users to sign up with a name field in the view, and I could not figure out what I was doing so wrong. That is, until I read the Devise documentation and slapped myself for not thoroughly reading it prior. If you want to customize and add a new attribute, you must add a before action to the application controller that configures parameter sanitization and permits the additional parameters.

application_controller.rb
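In code, that before action looks something like this (a minimal sketch of the Devise parameter sanitizer; :name is the custom attribute, and details may differ from my actual file):

class ApplicationController < ActionController::Base
  before_action :configure_permitted_parameters, if: :devise_controller?

  protected

  # permit the custom :name attribute through Devise's strong parameters
  def configure_permitted_parameters
    devise_parameter_sanitizer.permit(:sign_up, keys: [:name])
  end
end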

Next, I began working out how my associations would work and creating a nested resource. When a user successfully signs up, they are able to complete all CRUD actions on their own projects. They can also view projects created by other users but cannot make any changes to those projects.

routes.rb
project.rb
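The nested resource routing can be sketched like this (illustrative, not my exact code):

Rails.application.routes.draw do
  devise_for :users

  # nesting projects under users yields RESTful URLs like /users/1/projects/2
  resources :users, only: [:show] do
    resources :projects
  end
end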

Users are able to add an inspirational picture to their project which was possible using the Paperclip gem. Following along with the documentation this time made it go smoothly. Well, almost. I forgot to include some code in the views but all was well after that!

I think one of the biggest problems I got hung up on was the styling. Literally, just getting the positioning correct was a TASK. I was not expecting that, and I didn’t want to lose focus on implementing functionality and understanding how things were working. I ended up using a Bootswatch theme and the font-awesome-rails gem. I’m pleased with the aesthetics.

Check out a video walkthrough of The DIY Life below:

https://medium.com/media/9ddd64476f21e8a4efeda1b6f6068077/href

You can view the rest of my code here. Next up…Javascript!

Mary Umoh | Stories by Mary Umoh on Medium | 2017-06-12 02:09:45

Time is of the essence. There’s no time like the present. Time is money. Time flies when you’re having fun. My time at Mozilla is now at two weeks, and a lot has transpired. In this short time I’m seeing and learning things that I’ve never come across before, naturally. For the next 3 months, I’ll be working closely with my mentors to reskin and make improvements to bugzilla.mozilla.org. From setup to being a part of a real development team meeting, let me fill you in on some of what’s been happening.

Setup:

After receiving my laptop, the local environment needed to be set up. If you’re anything like me, this isn’t the part that makes you feel warm and fuzzy inside. It was everything I expected it to be: challenging. From creating new GitHub SSH keys to installing VirtualBox, Vagrant, and Homebrew, and running command after command in the terminal, I was finally up and running. It only took about…all day.


Fixing my contribution:

My goal for week one, aside from setup, was to fix my initial contribution. My patch had extra files and trailing spaces that needed removal in order to be approved. I was able to fix that and send in a pull request. Well, at least I thought I fixed it. My PR had some extra characters that I somehow did not see (don’t you hate when that happens), but after they were pointed out and fixed, it got marked resolved.

Week 2:

In week 2, I was introduced to bug 1369872. This bug is meant to unify CSS files. The index.cgi file loads ‘skins/standard/index.css’, but it also unfortunately loads ‘skins/contrib/Mozilla/index.css’, which gets merged into an assets folder. Once I began working on the bug and viewing changes in the VM, I saw immediate errors. Talking with my mentor showed me that simply moving files would not suffice; references to these lines of code needed to be changed as well. CSS override rules caused images to break, text to become misaligned and fonts to change. Needless to say, things got sticky quickly. Currently, I’m still working on and making changes to this bug for it to be resolved.

Mary Umoh | Stories by Mary Umoh on Medium | 2017-06-12 02:09:16

Introduction to Web Accessibility

Outreachy

Some background: GNOME’s Outreachy internship program is targeted at underrepresented groups in tech.

A variety of open-source companies are involved.

An employee of those companies can volunteer to be a mentor, and they design an internship with a particular topic and goal.

Those who are both interested and eligible submit applications for that internship.

The goal of the internship I was selected for is to automate regression testing for web accessibility.

My mentor, Matt Brandt, is a Senior Test Engineer on the Firefox Test Engineering team.

At a StarCanada conference, Brandt attended a presentation on Web Accessibility.

The talk inspired him to design this internship, and contribute to making the web more accessible.

What is Web Accessibility?

Web Accessibility refers to how easily an atypical user can access and use websites and web applications.

This includes people who have difficulty using a mouse or touchpad, those who have impaired sight or lack it altogether, and those who can use neither a mouse nor a keyboard.

Many people in these groups rely on screen readers or other assistive technologies.

Audio may be their only option to learn what a website is about, what it contains, and how to make use of it.

As it turns out, many resources on the web are incompatible with screen readers, and may be difficult or impossible for some people to use.

Click here for more information on designing for screen reader use.

Web accessibility is a very broad topic, as you can infer from taking a look at the latest Web Content Accessibility Guidelines, WCAG 2.0.

However, there are a few simple changes that can be made in how sites are developed that will significantly improve the experience for users with disabilities.

As I am writing this, I am well aware that my own websites fall short of this standard, which points to the biggest issue with web accessibility:

Most developers simply don’t know what accessibility is, how to design for it, or why it is important.

At this point, testing sites for accessibility is still largely a manual process.

While not all aspects of accessibility testing can be automated, the goal of this internship is to simplify and automate as much as I can.
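
To make that concrete, here is a minimal sketch of the kind of check that lends itself to automation – flagging images that ship without alternative text, one of the most common accessibility failures. This is my own illustration, runnable in a browser’s developer console, and not the internship’s actual tooling:

// Minimal sketch: list images on the current page that lack an alt attribute.
const missingAlt = document.querySelectorAll('img:not([alt])');
missingAlt.forEach((img) => {
  console.warn('Image missing alt text:', img.src);
});

A real regression suite would run checks like this across many pages and flag new violations automatically.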

Over the next three months, my hope is that I can make a significant impact on both web accessibility and automated testing.

Kimberly Pennington | Kimberly the Geek | 2017-06-11 21:18:57


I come into this internship with extensive knowledge of Wine and the AppDB as a user (9 years) and little in the way of technical skills. What I know is what I've picked up over the years as a Linux user, which, granted, is much more than what I would have learned had I stayed with Windows.

But in terms of what's needed to wrestle with the AppDB code, it's not much: basic HTML and CSS, an understanding of how relational databases work from a job years ago where I had to run queries using a now-defunct language on an IBM mainframe, and (thanks to having run regression tests on Wine) how to use git. I had no training or experience in Apache, MySQL, PHP, JavaScript, jQuery, or Bootstrap prior to being accepted as an intern.

I spent the month leading up to the official start of the internship setting up a local test environment, and with the help of a mentor who's been very patient with my newbie questions, getting better at troubleshooting Apache and PHP errors.

The learning curve so far has been about what I expected. MySQL has been the easiest, because I do understand how database queries work and just have to look up the specific MySQL syntax for what I want to do. PHP has been the hardest, in part because the AppDB code is poorly organized and difficult to follow even for someone who knows PHP.

I have nonetheless managed to fix some small bugs, and learned a few things. I expect the learning curve to get steeper before it gets easier.

Rosanne DiMesio | Notes from an Internship | 2017-06-10 08:08:55

This is an ongoing post where I’ll share resources I find for learning or productivity. When updates are made to a section, the section heading will be marked with a color and a marker (new or updated). Also, links will be ordered from newest to oldest.

Web Dev

JavaScript

Web Design

This is a really nice CodePen by Angela V for custom radio buttons (CSS).

Productivity

Creating command line shortcuts for git commands

Communication

Tips for embedding codepen samples on wordpress sites

  • I’m using oEmbed, so I just copy the URL of the CodePen into an empty line in my wordpress.com site editor

Internships Advice (new)

https://www.pwc.com/us/en/careers/campus/internships/make-the-most-of-your-internship.html

http://www.businessinsider.com/common-mistakes-interns-make-2014-6

https://hbr.org/2016/07/6-ways-to-make-the-most-of-your-internship

https://www.forbes.com/sites/susanadams/2014/06/18/how-to-make-the-most-of-your-internship-3/#27ac174e6f5c

https://www.forbes.com/sites/karimabouelnaga/2016/07/07/5-tips-to-make-the-most-of-your-summer-internship/2/#11bb7c5d6d3b


Carol Chung | CarroTech | 2017-06-09 19:18:40

Nftables provides filtering and classification of packets. It can be configured using the nft userspace command-line tool. It replaces iptables, ip6tables, arptables and ebtables.

There are different ways to install it. Here we will install it from source.
First, you will need to clone the repositories of libmnl and libnftnl, which are netlink userspace libraries.
$ git clone git://git.netfilter.org/libmnl
$ git clone git://git.netfilter.org/libnftnl


Build libmnl using the commands below.
$ cd libmnl
$ ./autogen.sh
$ ./configure
$ make
$ make install
libnftnl can be built in the same way.

We also require the GMP and readline development packages to successfully compile nftables. On Fedora, install them by typing:
$ sudo dnf install gmp-devel readline-devel

After installing dependencies, clone the nftables repository and build using the commands below.
$ git clone git://git.netfilter.org/nftables
$ cd nftables
$ ./autogen.sh
$ ./configure
$ make
$ make install

Now, let’s check nft by typing:
$ nft
nft: no command specified

The installation is successful if you get the same output as above.

Don’t worry if you get an error like:
nft: error while loading shared libraries: libnftnl.so.7: cannot open shared object file: No such file or directory
To solve it (typically by refreshing the shared library cache with ldconfig), check this Stack Overflow solution.

Let’s try out some nft commands.
The first command below adds a table named filter for the ip family. There are different table families: ip, arp, ip6, bridge, inet and netdev. The second command lists the table we added.
$ nft add table ip filter
$ nft list table filter
table ip filter {
}

Next, we need to add chains to group rules. Chains come in two kinds:
1. Base chains include a hook specification and are enclosed in curly braces.
filter, route and nat are the three possible base chain types.
$ nft add chain ip filter base { type filter hook input priority 0 \; }

2. Non-base chains are not attached to any hook and do not see any traffic by default.
$ nft add chain ip filter nbase

Rules are used to specify the action to be taken on packets. Each rule has an expression to match packets against, and one or more actions to apply when it matches.
$ nft add rule filter base ip saddr 127.0.0.1 accept
$ nft add rule ip filter nbase tcp dport http counter
$ nft list ruleset
table ip filter {
chain base {
type filter hook input priority 0; policy accept;
ip saddr 127.0.0.1 accept
}
chain nbase {
tcp dport http counter packets 0 bytes 0
}
}
The third command lists the ruleset. With these rules, packets with source address 127.0.0.1 are accepted, and the counter for TCP packets destined for the http port is incremented.

These rules can be saved in a file. Here, we first store the ruleset in a file named rulesnft, then delete the entire ruleset, and finally restore it from the stored file.
$ nft list ruleset > rulesnft
$ nft flush ruleset
$ nft -f rulesnft

Each rule is assigned a unique handle number, which can be used to delete that specific rule.
$ nft list ruleset -a

Replace n with the handle number obtained, then delete the rule by typing:
$ nft delete rule filter base handle n

Advanced data structures such as sets and maps, and many other features, are available in nftables. Refer to the links below to learn more about them.

https://wiki.nftables.org/wiki-nftables/index.php/Main_Page
https://developers.redhat.com/blog/2016/10/28/what-comes-after-iptables-its-successor-of-course-nftables/#more-428337
https://linoxide.com/firewall/configure-nftables-serve-internet/


Varsha Rao | Varsha's Blog | 2017-06-09 17:44:57

Lightbeam is a key tool for Mozilla to educate the public about privacy.

In this blog post, I will explain why I chose Lightbeam and what it does.

As I was browsing through the Outreachy project list, Lightbeam caught my attention for the following reasons:

  • front-end web development (all things JavaScript)
  • visualisations (D3.js)
  • a project from Mozilla (healthy, open and accessible Internet for all people)

But I was not sure about the following:

  • web privacy
  • security engineering

JavaScript topped the list, and I decided to give it a try. I must say, I am now very cautious about online third-party trackers and care about web privacy and security.

The key part of this project (internship) is to convert the existing Firefox add-on to a web extension and to explore simpler ways to convey complex privacy and security concepts to all Firefox users.
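
As an illustration of what the web extension side of this looks like, here is a minimal sketch of a background script that observes outgoing requests – the raw data behind Lightbeam’s visualisations. The listener shape follows the standard WebExtensions webRequest API, but the logging and naming are my own assumptions, and a real extension would also need the webRequest and host permissions declared in its manifest.json:

// Minimal sketch: log the hostname of every outgoing request per tab.
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    const host = new URL(details.url).hostname;
    console.log(`Tab ${details.tabId} requested ${host}`);
  },
  { urls: ['<all_urls>'] }
);

Aggregating these hostnames per first-party page is essentially what produces the graph Lightbeam draws.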

Web security & privacy are vast topics, and after playing around with a few keywords, I found these two interesting papers online:

A healthy internet is secure and private. While web tracking isn’t 100% evil (personal data can make your browsing more efficient; cookies can help your favourite websites stay in business), its workings remain poorly understood. Using interactive visualisations, Lightbeam’s main goal is to show web tracking, that is, to show the first and third party sites you interact with on the Web.

Your personal information is valuable and it’s your right to know what data is being collected about you – your age, income, family’s ages and income, medical history, dietary habits, favourite web sites, your birthday…the list goes on. The trick is in taking this data and shacking up with third parties to help them come up with new ways to convince you to spend money, sign up for services, and give up more information.  It would be fine if you decided to give up this information for a tangible benefit, but you may never see a benefit aside from an ad, and no one’s including you in the decision.

One key area where Lightbeam can help you make a difference is user control – deciding who can collect your data.

Lightbeam is your guide to help you surf the web while keeping your privacy intact.


Princiya Marina Sequeira | P's Blog | 2017-06-09 13:38:14

Welcome back 😀

Week 4/15. Things are going well. I completed documenting the sections that were planned for the first two weeks of the internship period (introduction and installation of the required software), and I am currently writing about concepts like the DOM, HTML and JavaScript. I keep updating my mentors about the sections I write, and so far they have approved the content. I just created a PR for an Oxford Reference translator. I need some input from the Zotero community on it, and I hope it gets merged in the coming week, after all the updates they suggest (because they always have ideas to get things done in a better way).

Our dear chatroom went through some health issues (technical problems) this week and couldn’t play a part in the communications we made. My mentor suggested the steps I could take to bring it back to life, and the WMF developers took great care of it by marking it “High” priority. A mail this morning greeted me with the good news from my mentor informing me that the chatroom was fit and fine now. It’s good to have the non-living member back on the team 😀😀

Note: Find my work here.

Summers 2017: You must never forget the reasons behind your actions that brought you where you are.


Sonali Gupta | It's About Writing | 2017-06-08 18:59:13

So this week I’m all about an unknown field – the PHP server.
I had to implement an API for the new notification that I am writing,
and it has to fit into the existing mechanism.

I have started to read the code base of the relevant extension, read and read,
and had a lot of open questions. I started to get a bit nervous.

I kept reading and reading, and just decided to give it a try – in the worst case, I would delete my code 🙂

I have ended up with a semi-working solution (not the completed task, but it is a start!), and of course, as always with coding projects, more questions piled up.

I hope to make more progress in the upcoming week so I can finish this API…


Ela Opper | FoxyBrown | 2017-06-08 17:32:48

What I learned in my first two weeks as a web development intern at Mozilla.

Bianca Danforth | Upgrade Lightbeam Blog | 2017-06-08 12:29:09

Take the first step in faith. You don’t have to see the whole staircase, just take the first step. -Martin Luther King, Jr.

Hello everyone, I am Prachi Agrawal, a 3rd-year undergraduate student at the International Institute of Information Technology, Hyderabad. This is my first attempt at writing, and coincidentally today happens to be my birthday :D

On 5th May 2017, while we were all celebrating my parents’ silver jubilee wedding anniversary, I got to know that I had been accepted to the Outreachy program as an intern at Sugar Labs from May to August. I was extremely happy and excited about it.

Talking about Outreachy, this program provides an amazing opportunity to those interested in contributing to FOSS. My journey with open source started back in my first year of college, when we were first introduced to the version control platform ‘GitHub’. Soon after, I started exploring various open source projects and finally found one aligned with my interests at Sugar Labs, for the May 30, 2017 – August 30, 2017 round of Outreachy internships.

Coming over to the contributions, I started looking over the existing ‘issues’ related to Music Blocks and picked the ones that I thought I could solve. Music Blocks has thousands of lines of code, and playing with such a huge code base seemed like a nightmare. But understanding the entire code at once is not required: spending some time on a few relevant segments of the code is all you need. So I picked an issue, wrote a few lines of code to fix it, and made the pull request. It gave me utter joy when my first PR got merged, and it motivated me to contribute further. I went on to solve a few more issues, finally wrote a proposal, and it got accepted :D

About my project: I am working on the “Timbre widget for Music Blocks” along with Tayba Wasim, my fellow project member, with Devin Ulibarri and Walter Bender as my mentors. Music Blocks is essentially a tool that helps children learn music. Currently there are multiple palettes in the Music Blocks window that provide the opportunity to generate different types of music. One of them is the “widget” palette. Tayba and I have to add another widget called ‘Timbre’ to that palette, which would alter the timbre of the sound being generated.

For the last three weeks, we were in the community bonding period. Now finally the coding has begun. We meet thrice a week to keep everyone updated with the progress that we make and also to discuss and resolve the issues that we face. The entire community is very helpful.

Currently I am working on the interface between ‘synth’ and ‘Tone.js’. I am reviewing the code to make it more modular, which will ensure that the Tone.js functions get used effectively. The plan is also to make a ‘note’ object to replace the existing ‘getnote’ method, roughly along the lines sketched below.
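
As a rough, purely hypothetical sketch of that direction (the names and fields below are my assumptions, not the actual Music Blocks code), a note object could bundle the values that ‘getnote’ currently computes and expose them in the note-name format Tone.js accepts:

// Hypothetical sketch of a 'note' object; illustration only.
function Note(pitch, octave, duration) {
  this.pitch = pitch;       // e.g. 'C'
  this.octave = octave;     // e.g. 4
  this.duration = duration; // e.g. 0.25 for a quarter note
}

// One place to build the note-name string Tone.js accepts, e.g. 'C4'.
Note.prototype.toToneName = function () {
  return this.pitch + this.octave;
};

Centralizing this logic in one object is what would make the synth/Tone.js interface more modular.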

I am looking forward to a productive summer with Sugar Labs.


Prachi Agrawal | Stories by Prachi Agrawal on Medium | 2017-06-07 18:55:18

Mozilla has organized a series of talks for summer interns. The first was by Selena Deckelmann, Director of Firefox Runtime.

These are some memorable points to think about:

  • Creating a circle of work advisers
  • Write down your dreams
  • Do the hard thing (find a hard problem to tackle)
  • Listen to criticism, sometimes

In her slide deck, the slide showing the periods that led to realizing her dreams also shows those periods coinciding with periods of struggle. I have to wonder whether I am not struggling enough(?).


Carol Chung | CarroTech | 2017-06-07 04:04:24

Webcompat is a volunteer service that allows web developers and designers to report cross-browser compatibility issues observed on their websites and applications. Each issue is triaged, and a solution or browser fix is returned.

Screenshot: webcompat.com, the browser compatibility reporting site

One of my tasks as a Front End Dev intern for Mozilla Webcompat (via Outreachy) this summer is to help migrate the webcompat.com application to a new architecture and version.

These are a few tools that web developers/designers can take advantage of today:

  • Firefox and Chrome add-ons: After installing this add-on in your browser, you can conveniently report cross-browser issues from your website with just one click. (See the upper left corner of the screenshot for the link.)
  • CSS Fix Me: This tool allows you to enter CSS that is not working across browsers; it returns updated CSS with vendor prefixes for improved cross-browser compatibility.
  • Media Reporting Tool: (available on Firefox Nightly only) Automatically detects browser-related video playback issues and provides a one-click method to report the issue from your website.

So far, I am learning about the code tidying processes that are required for the build to run (on Travis). These are a few resources for the processes we are using:

That is all for now. I look forward to meeting project members and other interns at the All Hands meeting this summer.


Carol Chung | CarroTech | 2017-06-06 00:07:20

My previous experience

During my studies and scientific work, I never faced the task of creating a user-friendly and functional graphical interface. Homework is usually set to create an effective algorithm, and, of course, we write it primarily to meet the basic requirements. We send in our solutions, pass the exam, and the written code is unlikely to ever be used again – so the most I had written for my programs was a command-line interface (getting arguments and processing flags).

In scientific work, too, research and the search for new solutions predominate over creating a useful interface. So when I was faced with the need to choose the best interface for the task I am solving, I surveyed this area. We considered the following types:

  • Command line interfaces
    • Argparser
    • Curses
  • Graphic interfaces
    • PyQT
    • Web interface

With the help of my mentor, I have prepared a list of pros and cons of each type of interface with regard to our task.

Command line interfaces

  • Argparser
import argparse

parser = argparse.ArgumentParser(
    description='This is an example argparser')
parser.add_argument('dir',
                    metavar='directory',
                    type=str,
                    nargs=1,
                    help='Path to a directory with file(s)')
parser.add_argument('filename',
                    metavar='filename(s)',
                    type=str,
                    nargs='+',  # one or more space-separated filenames
                    help='Filenames (space-separated)')
parser.add_argument('-out', '--outfile',
                    type=str,
                    help='Filename to save the output (optional)')
args = parser.parse_args()  # parse the command line into a namespace
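
(With the sketch above saved as, say, script.py, a typical invocation would be python script.py mydir a.txt b.txt -out results.txt.)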

I use it now, during development, for which it is quite convenient. But by the last stage of my internship I want to provide useful and user-friendly tools and interfaces, so I took a look at other types.

  • Curses [image: example of a curses interface]
    • (+) curses will be more convenient to run on remote servers without X11
    • (+) it will be easier to implement
    • (-) perhaps it will be convenient only for advanced users
    • (-) you will need some graphical interface if you want to visualise the data, but probably this is not so critical

There are also alternatives to Python curses; for example, I found Urwid.

Graphic interfaces

  • PyQT
    • (+) cross-platform
    • (+) easy to use
    • (-) need to install the pyqt package
    • (-) may be slow over remote connections
  • Web interface
    • (+) more cross-platform, as you can use it even from a tablet
    • (+) no need to install extra packages
    • (+) I could add interactive charts (I’ve used Highcharts, but there are many open-source libraries, for example D3: Data-Driven Documents)
    • (-) it can be more difficult to implement, but there are many frameworks; for example, I’ve tried the examples from Flask (http://flask.pocoo.org)
    • (-) a running web application must be accessed somehow, typically over SSH port forwarding, which is a hassle

The web interface seems more attractive to me; however, its implementation can take a long time. Since I would like to devote more time to developing the algorithm, we decided to choose the curses/urwid interface. At the same time, I’ll leave open the possibility of changing the interface if necessary, so if there is enough time, I will try to create a web interface.

I hope my summary will help you get an initial idea of which interface is best for you, but do not forget that the choice depends on the task you are solving and the time available to you.

Anastasia Antsiferova | Anastasia Antsiferova's Blog | 2017-06-06 00:00:00

I was selected for the Outreachy 2017 May–August round and joined the Lagom team.

Yuliana Apaza | Yuliana Apaza - Blog | 2017-06-05 23:21:00

I was very excited when I got the email saying I was accepted by Fedora for GSoC 2017. I will work on the idea – Migrate Plinth to Fedora Server – this summer.

I attended my graduation thesis defense today, and I had to spend most of my time on my graduation project last week, so I only did a little bit of work for GSoC in the first week. I will officially start my work this week – migrating the first set of modules from Deb-based to RPM-based.

This is the rough plan I made with my mentor, Tong:

First Phase

  • Before June 5, Fedora wiki {Plinth (Migrate Plinth from Debian)}
  • June 6 ~ June 12, Coding: finish the LDAP configuration and the first-boot module
  • June 13 ~ June 20, Finish user registration and admin management
  • June 21 ~ June 26, Adjust the unit tests to adopt RPM and Fedora packages
  • Evaluation Phase 1

Second Phrase

  • June 27 ~ July 8, Finish the system-configuration-related modules
  • July 9 ~ July 15, Finish all system modules
  • July 16 ~ July 31, Finish one half of the app modules
  • Evaluation Phase 2

Third Phase

  • August 1 ~ August 13, Finish the other app modules
  • Final test and finish the wiki
  • Final Evaluation

Mandy Wang | English WoCa, WoGoo | 2017-06-05 15:58:16

Yesterday, I launched my website! Yayyy… months after purchasing the domain name, here it is: princiya.com

It is a Hugo-powered website hosted on GitHub. I started with Jekyll but ended up using Hugo. My Groovy on Grails knowledge helped me with setting up and understanding Hugo.

For those of you wondering what Jekyll or Hugo is, here is an interesting article. In short, both Jekyll & Hugo are static website generators.

To start with, I am using the Kube theme for my website. I will continue to use this space (WordPress) for blogging. For now, I intend to use princiya.com to document my ‘Today I Learned’ series of articles.

Following are the (must) to-dos for my website:

  • SEO
  • Pagination for blog posts with next and previous links
  • Sort blog posts based on date
  • List all categories, tags etc for the posts
  • Logo, favicon
  • Google Analytics
  • Page to showcase my talks/presentations

I guess I will figure out the remaining things as I go. ‘Better late than never’ – I now have my personal website 🙂

Happy Monday!


Princiya Marina Sequeira | P's Blog | 2017-06-05 11:31:52

Outreachy.Started

After six hours of looking for a way to add If-None-Match headers on UploadInput and MultiPartUpload Input, I had to record this here for my own reference, but also to save someone else time in the future. I read the whole AWS Go SDK developer guide in vain during those hours.

There is an AWS Go SDK that we use to interface with Amazon Web Services. I was working with the S3 interface and needed to perform an upload using UploadInput.

After a rigorous and persistent search on the internet (with grumblings, of course), I found out that the AWS Go SDK S3 team still needs to add this feature to their public API. It was a feature request that had been open for about three weeks when I came across it. However, below is a workaround that saved my day.

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/request"
    "github.com/aws/aws-sdk-go/service/s3"
)

// WithIfNoneMatch builds a request option that injects an If-None-Match
// header into the outgoing request, since PutObjectInput does not expose it.
func WithIfNoneMatch(conditions ...string) request.Option {
    return func(r *request.Request) {
        for _, v := range conditions {
            r.HTTPRequest.Header.Add("If-None-Match", v)
        }
    }
}

// Inside your upload function, pass the option as a variadic argument:
svc := s3.New(sess)

svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
    Bucket: aws.String(myBucket),
    Key:    aws.String(myKey),
    Body:   reader,
    // Other parameters...
}, WithIfNoneMatch("etag"))

Happy hacking! I am now on to connecting my tests to RGW for my summer Outreachy project.

The world is against me today!!!

Nevertheless, staying very positive.

Joannah Nanjekye | Joannah Nanjekye | 2017-06-05 00:00:00

Four months have passed since my last post.

On the first weekend of March I took part in BioHack. Unfortunately, I had no chance to work on my own project: only a few people wanted to work on it, and under the rules it wasn’t allowed to proceed. My friends’ applications were rejected :( So I chose another project – very routine and not-so-interesting-at-all – and tried to do my best. Since it is hard not to sleep for 48 hours (I tried!) and stay productive, I mostly worked from home to save the commuting time (approx. 1-1.5 hours from my home to the hackathon’s venue) for actual work. It was a simple but not very interesting project – we were to collect data from the Protein Data Bank and process it in a simple way. Anyway, my birthday was on Friday, March 3rd; I spent it at the hackathon, so I had no chance to be upset :) No time to think that people you wish would congratulate you don’t even care. We got some results, but I was exhausted. And because I worked from home, I didn’t see the final scripts – the project’s idea mentor ran them on his own laptop and didn’t commit them to the project’s GitHub repository.

The next week I spontaneously decided to go to another hackathon. I was a little bit disappointed because of my project’s rejection and the whole BioHack thing, and my friend (another ex-BIOCAD worker) told me that his ex-colleague (whom I had had no chance to meet before, because he worked there at a time when I was “too-emotional-to-work-with” or something) had a project idea but no volunteers to work on it. This hackathon wasn’t scientific; it was devoted to AI and the business apps you can build with it. I thought that if I can’t work on my own idea, at least I could help him work on his :) I convinced my husband to come with us. At the hackathon, another girl (a physicist) joined our brave team. And we won 2nd place. We tried to use RNNs to generate molecules and built several simple models to predict their properties – it was a new field of study for me and for the whole team; no one had previous experience with chemoinformatics or RNNs.

There is an article about it (in Russian); you can read it here. Unfortunately, the journalist who wrote it made several mistakes – in particular, a small typo: “Insilo Medicine” instead of “Insilico Medicine”. And it is embarrassing for us, since the article is presented as if we wrote it.

Many other things have happened since March – I even became 1/4 of a scientific advisor for an MSc student!

And there is a thing I realized: being a misused serial intern – for one particular company, one particular department and one particular boss – does hurt. I should never do it again. For now I’m not sure if I’ll ever manage to find a job that lets me avoid it.

Tatiana Malygina | Living under the orange moon | 2017-06-04 19:00:00

En route from LA to Portland <3

It’s uncanny to revisit this blog and see that the date of this post is exactly one year later than this post describing the beginning of my 2016 Outreachy summer internship.

I feel like I’ve passed through multiple selves between then and now – even in the past 3-7 months, I feel like I’ve progressed through several new identities since I began working at DreamHost and since I began maneuvering through various unexpected life events. In the past 7 months, I’ve touched very little of the following:

(Those are all tools that make this blog hum.) I’ve done very little writing, period.

Hello again, world.

Indeed, this post is inspired (mandated :) by the generous conference stipends that are made available to Outreachy interns by the Software Freedom Conservancy. DreamHost also generously financed a large portion of my trip to Portland for Write the Docs NA.

Revisiting this Q&A post, I see again how documentation was so very key to making my first steps as a Homebrew contributor (Homebrew being yet another thing I’ve missed playing with over the past 7 months). Now that I’m back in touch with Mike and now that I’ve picked up inspiration and momentum from WTD, I’m excited to help further the current incarnation of the brew docs. I’m also thankful that I did document so much of my process last year – if only to remember: hey, I did these fun things.

FULL CIRCLE. :)

Write the Docs is hands down the best community I’ve joined in my short time working in software. I’m already excited to help out at WTD in Prague this September, and I’m doing my best to evangelize WTD to everyone at work, everyone I know who works in technology, and everyone I know who cares about open source. A few of us here in LA are also working on starting a local chapter to add to the WTD community. And, thanks to a solicitation by Nik Blanchet at the very end of the conference, I’m excited to help spin up a WTD Latin America. <3

It’s really hard to distill all my conference thoughts and feelings in a coherent way. I’ll let a few photos do some talking…

hike

  • On Sunday, I participated in Writing Day. I made baby steps towards contributing to Kubernetes documentation. Andrew and Jared at Google have been so very helpful and welcoming to new contributors.

Writing Day

Eric Holscher

  • Here’s Jodie Putrino talking about treating documentation workflow like coding workflow – with the aid of an awesome stock photo provided by F5. Tongue-in-cheek caption: The information you need is clearly provided in the documentation.

Jodie Putrino

Sam Faktorovich

  • Here’s Matthew Buttler talking about the partnership between documentarians and support … in a slide that speaks for itself. :)

Matthew Buttler

  • Here’s a sign that was posted by each of the restrooms in the beautiful Crystal Ballroom. It also speaks for itself. :)

Crystal Ballroom restrooms

None of this is about winning, of course, but – WTD wins at inclusivity, organization, thoughtfulness, and documenting process <3 when it comes to community and conference organizing. It’s why at least three of us who travelled to Portland from LA came away so inspired and motivated to form a local WTD chapter this year. We want to keep the conversations going in southern California and help spread the same openness of spirit, openness to dialogue, and openness to learning from one another.

(BTW, a full list of the conference speakers can be found here).

Andrea Kao | andrea kao | 2017-06-02 00:00:00

Yes, I agree, 18th May to 1st June is a huge time span. Well, I couldn’t make the last blog post, so this one will sum up my work from then till today.
I have decided to write a translator for mediawiki.org and present it as the working example for the documentation. I went through the existing translator Wikipedia.js in order to understand how wiki sites need to be translated and what data is included in the citations for these articles. Though MediaWiki pages won’t be extracted in exactly the same way, it gave me a fair idea, because I was confused about where to begin with translating these huge pages. For the translation of search results, I went through Wikisource.js. Things are clearer now. Also, I looked into the user guide of VisualEditor (a quick read, as it is pretty easy to use anyway). The official internship period started on 30th May, 2017. Now I can say I am an intern at Wikimedia, Woohoo!
For the first week, as planned, I have started working on two sections of the documentation: a brief about Citoid and Zotero and their relationship. I went through all the content available on Citoid and wrote a gist about it. Of course, with a wiki page dedicated to it, it doesn’t make sense to elaborate further. I came across a research study on Citoid’s support and the TWL/Citoid page. Some issues reported there are resolved, some aren’t. It also made clear how important it is to widen the coverage of Zotero translators.
With at least 7 hours given to the project every day, I am looking forward to creating more translators alongside; a skeleton of what a translator looks like is sketched below. I have been a little inactive since the last translator got merged. I have a few sites in line. Let’s see how this week ends.
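
For anyone unfamiliar with translators: every Zotero web translator implements two entry points – detectWeb, which decides whether (and as what item type) a page can be translated, and doWeb, which builds the citation items. Below is a heavily simplified sketch of that shape; the detectWeb/doWeb entry points and Zotero.Item are real, but the mediawiki.org-specific matching logic is invented for illustration:

// Simplified translator skeleton; the page-matching logic is illustrative only.
function detectWeb(doc, url) {
    // Return an item type if this page can be translated, false otherwise.
    if (url.includes('mediawiki.org/wiki/')) {
        return 'encyclopediaArticle';
    }
    return false;
}

function doWeb(doc, url) {
    var item = new Zotero.Item('encyclopediaArticle');
    item.title = doc.title;
    item.url = url;
    item.complete();
}
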
Deadline: 5th June 2017

Summers 2017: This is what it looks like on heated afternoons.


Sonali Gupta | It's About Writing | 2017-06-01 17:23:59

So this week was the first one in which I really coded and got to know the code structure.
It was challenging, I must admit: working only from a guideline page (my mentors are extremely responsive, but live in very different time zones) on a POC in very new tools, language and all.

I’m glad to write that I did succeed eventually – not very fast, but I did!

My next goal will be more complex and thus more time-consuming – building an API that will respond to the UI element and serve its requests.

I guess the more I work on this project, the more comfortable I will become with it, and the better I will understand the changes and additions that need to be made in order to complete the feature.


Ela Opper | FoxyBrown | 2017-06-01 14:47:14

Experience, Impact, Gratefulness, #latepost…

This blog post is about my Outreachy winter internship at the Wiki Education Foundation, working on the Dashboard project with two amazing mentors – Sage Ross (my development guru) and Jonathan Morgan (my design guru) – and about how it helped me!

What I learned —

Various new technologies and design techniques, as I refined my styling skills while developing the user interface – you can read more about them in my previous blog posts. I also gained a lot of knowledge about open source development, GitHub, code reviews and pair programming.

How it helped me — Experience matters!

Won 1st place in the Grand Finale of Smart India Hackathon 2017 with a team of 6, under the Department of Defence Production, for the project titled “Vehicle Detection and Localization in aerial images” (YAY!!)

I worked on developing the user application, using Ruby on Rails for the back-end processing and HTML and CSS for the front-end development. With each judging round, the judges suggested a few tweaks to improve the user interface; I was able to understand their suggestions, and making the changes didn’t take long.

Thanks to the experience I gained working at the Wiki Education Foundation on the Dashboard project, I was able to give my best to the team during those 36 hours of the hackathon.

Thanks, Outreachy, for giving us such a great opportunity – helping us reach out to open source organizations and amazing mentors, kick-starting our open source development, connecting all the extraordinary interns through the blogs we share at Planet Outreach, and much more!!!

Thank you, Outreachy organizers, for making all of this possible. I would like to share my OutreachyLove with all my fellow interns, my mentors and the organizers :)

What’s next —

I have been accepted for a Google Summer of Code internship to work on improving the usability of the Programs and Events Dashboard – a tool that assists with the management of wiki programs and events. I am excited about the project, and also about the fact that I’ll get to work with Sage and Jonathan again! The internship has started, and I’ll keep updating you about the progress of the work – stay tuned!

Sejal Khatri | Stories by Sejal Khatri on Medium | 2017-05-31 21:30:21