Scary Terry Celtics Sweater

Cavs are going to get a lot of home officiating today. The NBA doesn’t want the series over too soon, and they want LeBron in the Finals. There is going to be a HUGE disparity in fouls (3 to 1, maybe) that are called.

As much as I want Boston to win, I know the chances of that are SLIM.

As a bonus prediction, LeBron gets to the line 14 times. Eehhhh. Everyone said the same thing about game 2…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-05-20 10:11:31

During this Google Summer of Code, I’m working on FHIR analytics capabilities using Apache Spark. I’m researching Bunsen, which is built on top of Apache Spark to provide FHIR analytics capabilities via Java and Python APIs.

Bunsen currently provides the functionality to load FHIR Bundles into Spark, which allows users to use Spark SQL or the underlying Java/Python APIs to perform queries on the loaded data.

When loading data into Spark, Bunsen maps each FHIR resource to a Java object structure using the HAPI FHIR library. For example, if Observations are loaded into the system, the user can use Spark SQL in the following manner to query the data.

spark.sql("""select subject.reference,effectiveDateTime,valueQuantity.value from observations where in_valueset(code, "heart_rate") limit 5 """).show()
+---------------+-----------------+-------+
| reference|effectiveDateTime| value|
+---------------+-----------------+-------+
|Patient/9995679| 2006-12-27|54.0000|
|Patient/9995679| 2007-04-18|60.0000|
+---------------+-----------------+-------+
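
As an aside, the same query can also be expressed through the PySpark DataFrame API instead of raw SQL. This is a minimal sketch, assuming the observations table has already been registered for Spark SQL by Bunsen as above, and that the in_valueset UDF is available to SQL expressions (as the query above implies).

from pyspark.sql import functions as F

# The "observations" table registered above, queried via the DataFrame API.
observations = spark.table("observations")

heart_rates = (observations
    # in_valueset is the same Bunsen UDF used in the SQL query above.
    .where(F.expr('in_valueset(code, "heart_rate")'))
    .select("subject.reference", "effectiveDateTime", "valueQuantity.value")
    .limit(5))

heart_rates.show()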

Bunsen also provides a rich Java API for FHIR analytics. Bunsen makes FHIR analytics easier by using FhirEncoders. With encoders, the user can use the Java API in the following manner to analyze FHIR data.

FhirEncoders encoders = FhirEncoders.forStu3().getOrCreate();

List<Condition> conditionList = // A list of org.hl7.fhir.dstu3.model.Condition objects.

Dataset<Condition> conditions = spark.createDataset(conditionList,
encoders.of(Condition.class));

// Query for conditions based on arbitrary Spark SQL expressions
Dataset<Condition> activeConditions = conditions
.where("clinicalStatus == 'active' and verificationStatus == 'confirmed'");

// Count the query results
long activeConditionCount = activeConditions.count();

// Convert the results back into a list of org.hl7.fhir.dstu3.model.Condition objects.
List<Condition> retrievedConditions = activeConditions.collectAsList();

Bunsen also allows users to load data from JSON or XML using Spark map functions.

// Created as a static field to avoid creation costs on each invocation.
private static final FhirContext ctx = FhirContext.forDstu3();

// <snip>

FhirEncoders encoders = FhirEncoders.forStu3().getOrCreate();

Dataset<String> conditionJsons = // A Dataset of FHIR conditions in JSON form.

Dataset<Condition> conditions = conditionJsons.map(
(MapFunction<String,Condition>) conditionString -> {
return (Condition) ctx.newJsonParser().parseResource(conditionString);
},
encoders.of(Condition.class));

// Arbitrary queries or further transformations on the conditions Dataset go here.

Currently I’m researching integrating Bunsen with Cassandra by loading data from a Cassandra database. DataStax provides the Spark Cassandra Connector, which allows users to load data directly from a Cassandra database into Spark structures. The following is a sample of the Java API provided by the DataStax Spark Cassandra Connector:

JavaRDD<SampleBean> cassandraRdd = CassandraJavaUtil.javaFunctions(sc)
.cassandraTable("simple_ks", "simple_cf", mapColumnTo(SampleBean.class)).select("value");
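
The same connector also exposes a DataFrame reader, so an equivalent load can be sketched from PySpark as well. The keyspace and table names below simply mirror the Java snippet above; this assumes the spark-cassandra-connector package is available on the Spark classpath.

# Read a Cassandra table into a Spark DataFrame via the DataStax connector.
cassandra_df = (spark.read
    .format("org.apache.spark.sql.cassandra")
    .options(keyspace="simple_ks", table="simple_cf")
    .load()
    .select("value"))

cassandra_df.show()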

I’m researching further how to integrate Bunsen to load data from Cassandra. Also, according to my research, Bunsen accepts FHIR Bundles, but I’m looking for the capability to load data into Spark via Bunsen using individual FHIR resources themselves.

It’s very interesting to learn about these technologies.

Kavindya Prashadi Bandara | Stories by Kavindya Peramune Rallage on Medium | 2018-05-16 12:58:54

Fortnite Nike V Neck Shirt For Ladies

My schedule has been crazy, but I’ve been watching on YouTube since I haven’t been able to make the FB streams. Why do I always gotta stand so close to open chests and ammo, anyone else have that prob? I literally have to stand on top of it lol, I have tap to hold on though. I’m gonna…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-05-14 13:08:51

This post is long overdue, but I have been so busy lately that I didn't have the time to sit down and write it in the past few weeks. What have I been busy with? Let's start with this event, that happened back in March:

Debian Women meeting in Curitiba (March 10th, 2018)

The eight women who attended the meeting gathered together in front of a tv with the Debian Women logo

At MiniDebConf Curitiba last year, few women attended. And, as I mentioned in a previous post, there was not even a single woman speaking at MiniDebConf last year.

I didn't want MiniDebConf Curitiba 2018 to be a repeat of last year. Why? In part, because I have been involved in other tech communities and I know it doesn't have to be like that (unless, of course, the community insists on being misogynistic...).

So I came up with the idea of having a meeting for women in Curitiba one month before MiniDebConf. The main goal was to create a good environment for women to talk about Debian, whether they had used GNU/Linux before or not, whether they were programmers or not.

Miriam and Kira, two other women from the state of Parana interested in Debian, came along and helped out with planning. We used a collaborative pad to organize the tasks and activities and to create the text for the folder about Debian we had printed (based on Debian's documentation).

For the final version of the folder, it's important to acknowledge the help Luciana gave us, all the way from Minas Gerais. She collaborated with the translations, reviewed the texts and fixed the layout.

A pile with folded Debian Women folders. The writings are in Portuguese and it's possible to see a QR code.

The final odg file, in Portuguese, can be downloaded here: folder_debian_30cm.odg

Very quickly, because we had so little time (we settled on a date and a place a little over one month before the meeting), I created a web page and put it online the only way I could at that moment, using Github Pages. https://debianwomenbr.github.io

We used Mate Hackers' instance of nos.vc to register for the meeting, simply because we had to plan accordingly. This was the address for registration: https://encontros.matehackers.org/pt/projects/60-encontro-debian-women

Through the Training Center, a Brazilian tech community, we got to Lucio, who works at Pipefy and offered us the space so we could hold the meeting. Thank you, Lucio, Training Center and Pipefy!

Pipefy logo

Because Miriam and I weren't in Curitiba, we had to focus the promotion of this meeting online. Not ideal when someone wants to be truly inclusive, but we worked with the resources we had. We reached out to TechLadies and invited them - as we did with many other groups.

This was our schedule:

Morning

09:00 - Welcome coffee

10:00 - What is Free Software? Copyright, licenses, sharing

10:30 - What is Debian?

12:00 - Lunch Break

Afternoon

14:30 - Internships with Debian - Outreachy and Google Summer of Code

15:00 - Install fest / helping with users issues

16:00 - Editing the Debian wiki to register this meeting https://wiki.debian.org/DebianWomen/History

17:30 - Wrap up

Take outs from the meeting:

  • Because we knew more or less how many people would attend, we were able to buy the food accordingly right before the meeting - and ended up spending much less than if we had ordered some kind of catering.

  • Sadly, it would have been almost as expensive to print a dozen folders as it was to print a hundred of them. So we ended up printing 100 folders (which was expensive enough). The good part is that we would end up handing them out during MiniDebConf Curitiba.

  • We attempted a live stream of the meeting using Jitsi, but I don't think we were very successful, because we didn't have a microphone for the speakers.

  • Most of our public ended up being women who, in fact, already knew and/or used Debian, but weren't actively involved with the community.

  • It was during this meeting that the need for a mailing list in Portuguese for women interested in Debian came up. Because, yes, in a country where English is taught so poorly in schools, the language can still be a barrier. We also wanted to keep in touch and share information about the Brazilian community and what we are doing. We want next year's DebConf to have a lot of women, especially Brazilian women who are interested and/or who are users and/or contribute to Debian. The request for this mailing list would be put through by Helen during MiniDebConf, using the bug report system. If you can, please support us: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=895575

Pictures from the meeting:

The breakfast table with food

Our breakfast table!

Miriam telling the women about Free Software, six women listening

Miriam's talk: What is Free Software? Copyright, licenses, sharing

Renata and Miriam talking about What is Debian; a TV between them shows the title of the talk

Miriam and Renata's talk: What is Debian?

Renata talking about internships with Debian

Renata talking about internships with Debian

Thank you to all the women who participated!

The participants with the two men who helped with the meeting.

And to our lovely staff. Thank you, Lucio, for getting us the space and thank you, Pipefy!

This has been partly documented at Debian Wiki (DebianWomen/History) because the very next day after this meeting, Debian Wiki completely blocked ProtonVPN from even accessing the Wiki. Awesome. If anyone is able to, feel free to copy/paste any of this text there.

Renata D'Avila | Renata's blog | 2018-05-13 20:49:00

Fortnite Just Play It Long Sleeve T Shirt

Do the same and let Epic do the work. Holden Mahorney, isn’t that what happens whenever anyone dies.  I ain’t like the rest of these fan boys. I have a lot of wins in every mode. The thing that bothers me most is my bullets not going where my crosshairs are pointing due to the ugly bloom on here. I’m getting to the point where I would rather play COD or D2 or any other shooter for that matter.…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-05-12 14:27:43

Review Weight Lifting Gym Unicorn T Shirt

Calling out all Tar Heel fans, be there, be loud, be proud! There is more than one great sport at UNC, support them all! Go heels….just do your own damn work and take normal classes, we can’t afford another scandal right now IJS. Geez I can’t remember the last time we got ranked and then won the next game…somethin tells me that streak’s comin to an end Aug. 30 baby!! GO HEELS!!!!! Way to go Heels I seen…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-05-11 01:53:45

Some Geek Feminism folks will be at the following conferences and conventions in the United States over the next several weeks, in case contributors and readers would like to have some informal get-togethers to reminisce and chat about inheritors of the GF legacy:

If you’re interested, feel free to comment below, and to take on the step of initiating open space/programming/session organizing!

Geek Feminism | Geek Feminism Blog | 2018-05-09 18:25:32

Great Indian Developer Summit is India’s longest running, independent polyglot conference series for the software practitioner. It took place from 24th-28th April this year in Bangalore.

I attended the conference on 27th April. The track for the day was GIDS.DEVOPS & ARCHITECTURE, aimed at deepening one’s knowledge of DevOps, Reactive Architecture Patterns, Getting Things Done (GTD), Evolutionary Architecture, Agile Design, Functional Design, Serverless, FaaS, Machine Learning, TensorFlow, TLS, Encryption, Docker DSL, Git, Gradle, Jenkins, Value Driven Development, Pipelines as Code, Continuous Delivery, Containers, Microservices, and much more.

The day started at 8:20 AM with the welcome note as the main hall swelled up with developers. The first session was by Mark Richards on “The Move towards Architectural Modularity”. Following it we had Siddharth Roy and Ashish Atre. I attended Neal Ford’s session “Stories Every Developer Should Know”, “Serverless? Not so FaaS!” by Matt Stine, “Supporting Constant Change” by Neal Ford, “Reactive Architecture Patterns – Part 1” by Mark Richards, “The Architecture of Universal Design: All Devices, All Users” by Scott Davis and “Why Containers Will Take Over the World” by Elton Stoneman.

8:30 – 9:30
“The Move towards Architectural Modularity” – Mark Richards
The drivers of modularity are:
1. Agility
2. Testability
3. Deployability
4. Scalability
5. Availability
Distributed modular architecture has 3 methods.
1. Microservices
2. Service-based
3. Event Driven
Modularity is a must, though not every portion of production has to be a microservice. Microservices need collaboration instead of communication, and only when one or more of these drivers are present should one make use of microservices.

10:45 – 11:45
“Stories Every Developer Should Know” – Neal Ford
One who doesn’t remember history is condemned to repeat it.
I am sharing some of the stories that he talked about in the session and the cause of the unfortunate event:
Debugging in production
Too little infrastructure
Too much infrastructure
Don’t reuse when cleanup needed
Meta work > work

Most of the stories had one common mistake: reusing code.

11:55 – 12:25
“Serverless? Not so FaaS!” – Matt Stine

Lunch and Food

13:50 – 14:50
“Supporting Constant Change” – Neal Ford

15:00 – 16:30
“Reactive Architecture Patterns – Part 1” – Mark Richards

16:10 – 17:10
“The Architecture of Universal Design: All Devices, All Users” – Scott Davis

17:30 – 18:30
“Why Containers Will Take Over the World” – Elton Stoneman

Sonali Gupta | It's About Writing | 2018-05-09 12:19:12

House Deadpool We Are Touching Our Selves Every Night Sweater

So not only do they erroneously think they can do the same at home, but they also think that’s all it takes for a restaurant to make burgers, devaluing what the professionals do for a living. Moreover he perpetuates a bad standard and terrible skills. Slamming his ingredients and his tools around like that, he’s begging for that blade to go flying off and hit someone in the eye, or at the very…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-05-09 05:04:49

Thanos Salt Bae Tee

Our pride and culture exceed anyone else’s. Aztecs, Mayans, we are the business to any Thanos Salt Bae shirt. Only a Mexican can make something out of nothing lol!!! We’re built that way and if you’re born and raised in the states and don’t have family in Mexico or visit, you are not Mexican. LoL if you speak spanish in a funny way where your Mexican relatives look at you like wtf, you are not Mexican your…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-05-08 13:25:58

Anjali is a curious young girl who lives in a small village in Bihar, India. She got a scholarship from the Pokhrama Foundation to attend one of the best schools in that area. This NGO has its own selection process; you can get an idea about the process on their website. Why/how/when I got into mentoring, that is another story! I want to write here about this young girl.

What motivated me to write this blog post is the curiosity and enthusiasm of this young lady. She is in 6th standard. Believe me, she motivates me! I am trying to mentor her remotely as we both live in different cities. I talk to her once or twice a week and try to help solve her problems. She is an introvert, and it was really hard for her to open up to me and share her problems. When I look back at my life, it was really hard for me too. I was an introvert until college. I am glad she is open with me now. This is a learning process for me as well. I will list a few of the things I am learning:

  • She is a morning person; she studies in the morning. I am in the process of becoming one.
  • She asks questions, I mean literally lots of questions. Sometimes I don’t have an answer to her question, and here I learn. I realized I don’t know lots of basic things. I admit it whenever I don’t know something. I read about those things and get back to her (see, I am still close to academia. This makes me feel good).

“The important thing is not to stop questioning.”

  • I used to ask a lot of questions while I was a student and I tell her to do the same. Now I am a working woman, but I still ask; this helps me learn a lot of things. For example, one colleague was working on some visualization, and out of curiosity I asked her how she did it. That day I learned about a new library that we can use for visualization.
  • Keep learning. She makes me realize that I don’t know a few things that probably everyone should know.
  • A rough timetable of the day is important.
  • Deciding priorities. Hard but important. Advice from Prof. Philip and Prof. Vijay.


I will add more points to this post later. I am missing a lot of points.

Rakhi Sharma | aka_atbrakhi | 2018-05-07 17:00:28

Porn kills Love Tee

These teams are coached to play this way, that’s how they get their stats. That’s why Harden got D’Antoni as his coach and is going nuts. You know who else won MVP under D’Antoni? Steve Nash. Well, is it me or does Harden actually look like he’s got a little Nash in his passing game now? Coaching doesn’t matter. It’s players only out there when Dan’s watching. No Ben. Brad’s a good coach. But…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-05-07 05:11:40

Thanos Infinity Gauntlet Sweater

Titan got its chance. Earth should too.

YES HE IS RIGHT, GENERALLY SPEAKING HUMANS ARE AS THICK AS Thanos Infinity Gauntlet Shirt (deny it all you like, look through world headlines even in the past twenty years and you will find we are pretty dumb as a species)
Yes wipe half the planet out. Meh. His idea isn’t new. The antagonist of the book Inferno also had the same plan. In fact, his idea was…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-05-05 15:46:29

The beginning is always hard. I also went through the difficult procedure of getting started with my own blog. So I am taking this opportunity to try to make it a bit easy for others in the same boat.

If the title didn’t already explain it, this article is a 101 for setting up your own blog, right from coding to hosting and serving it with a domain name.

I’m assuming here that:

  • You know the basics of web development, i.e. programming with HTML, CSS, and JavaScript. (Not necessarily to follow this article, but you will need to know them to maintain your blog.)
  • You use GitHub or some other version control system; you don’t necessarily need to know what version control is (just in case). If you haven’t used GitHub before, then please first go through this beginner’s guide and get familiar enough to be able to set up a repository and maintain it.
  • You understand templating in web development and, of course, understand OOP.

Note: I use Ubuntu 16.04.

There are two things to keep in mind,

  • The process is easy, but only if you follow along smartly.
  • Google Is Your Friend if you get stuck somewhere.

Note: I will only describe the bare minimum for the setup; for the rest, there are official docs to follow. Nothing works better than them, believe me.

Let’s begin!


1. Setting up GitHub Pages!

Before getting into the development of the blog, we are going to set up the hosting first, as it isn’t going to require much effort. We are going to leverage the free hosting provided by GitHub under the name GitHub Pages.

It is as easy as creating a repository. Follow these steps:

  • Create a new repository with the name <username>.github.io, where username is your GitHub username like mine is “curioswati”.
  • Clone the repository to your local filesystem with:

      git clone https://github.com/username/username.github.io
    

(Don’t forget to replace username with your GitHub username.)


2. Set up your custom domain! (Optional)

Buy a domain name from some domain name provider (the how-to is out of the scope of this post). Follow this GitHub guide for setting up your custom domain with GitHub. The procedure, in short, is:

  • Create a CNAME file (the name of the file is CNAME, no extension) in the root of the repository. It should contain your domain name, as mine contains: “swatij.me”. You can create the CNAME from GitHub’s web UI by following these steps:

    • Go to your repository <username.github.io>.
    • Go to Settings.

      Repository Settings page

    • Under “Custom Domain”, write your domain’s name and click “Save”.

      Custom Domain

      After this, you will see a message saying that “CNAME was created”.

  • Create 2 records in the DNS by following along with this post. The DNS changes can take up to a full day to propagate, so you may have to wait up to a day to see your site running on your <domain.com>. Until then, you can see it at <username.github.io>, which will also take at least 10 minutes after creating the repository to go live (before that, it will show a 404 page).
    Later, when the DNS changes are up, your <username.github.io> will also redirect to your <domain.com>.

So, the extra setup is done: we have our blog hosted on GitHub and served at a <domain.com>. But right now it’s blank, as we have nothing in our repository to be shown. We need to convert it into a blog now. So let’s move on to the next and most important step: “Development”.


3. Introducing Jekyll

Jekyll is a static site generator. Yes, it generates “static” sites. That’s why you can’t use it for full-fledged websites with many fancy features. But that’s not what it is intended for. The introduction on its site includes the phrase “blog aware”, which itself tells the story. I myself have never regretted using it in the past 4 years. It takes some extra effort to get something new in, but it’s worth it. Because the selling point is: “Your content is yours forever”. It uses Liquid (a templating language) to render the content, which can be written in either HTML or Markdown.


4. Installation and Configuration for Jekyll

I. Installation

You have 2 options:

  • Either download this gist, extract and run the python script.
  • Or follow the documentation.

The gist mentioned above does the following:

  • Checks the operating system version (the commands will work on Debian-based Linux systems only).
  • If the install is instructed, runs the following commands:

    Install build dependencies

      sudo apt-get install gcc g++ make software-properties-common python-software-properties
    

    Add PPA for ruby.

      sudo add-apt-repository ppa:brightbox/ruby-ng
    

    update cache.

      sudo apt-get update
    

    Install ruby 2.2

      sudo apt-get install ruby2.2 ruby2.2-dev
    

    Install the latest Jekyll (3.7.3 at the time of publishing).

      sudo gem install jekyll
    
  • If uninstall is instructed, runs the following command.

      sudo apt-get remove ruby* ruby*-dev rubygems
    

So, you can manually use these commands to install Jekyll with Ruby, or use the gist or the docs, whatever you prefer.
Check whether Jekyll was installed with the following command:

jekyll --version

It should show you jekyll x.x.x.

Here are some troubleshooting tips if you need them. If you run into some new kind of issue, do consider reporting it here.


II. Configuration

First, let’s create our blog!!

  • Change directory to your newly created repository and run the following command from inside it.

      jekyll new .
    
  • You have two options from here on; I am not getting into details and will just guide you through the easier one. Remove the file named Gemfile and then start the development server.

      jekyll serve
    

Now let’s see it running: navigate to http://localhost:4000 in your browser. You should see an introduction page.

The other option that I didn’t mention was to use a gem-based theme, which will require you to install bundler. You can find out more here and the basic usage here.


5. Let’s talk Jekyll

We’ll cover some concepts here so you can have a head start. Later on, you can always go to the docs for details.

But before that, create some directories for important files that I’m going to mention below.
Your directory structure should look like:

username.github.io/
    |- _posts/
    |- _layouts/
    |- _includes/
    |- _config.yml
    |- index.html
    |- static/


The _config.yml

This file is the communication link between you and Jekyll; you will see that when you open it. It deals with everything that you will ever use with Jekyll. For starters, fill in your relevant details. These are site variables which will be accessible elsewhere as site.<variable_name>, so you can use this file for site-wide configurations.

Mine looks like:

title: My FullName
email: my_email@domain.com
description: > # this means to ignore newlines until "baseurl:"
  Site Description.
baseurl: "" # the subpath of your site, e.g. /blog/
url: "http://username.github.io" # the base hostname & protocol for your site
twitter_username: my twitter_handle
github_username:  my github_username


The Front Matter

It is a very cool feature of Jekyll. It is the content enclosed between --- at the beginning of any file. You can specify the front matter by adding the following at the beginning of your post page.

---
layout: default
title:  Title for the page
date:   YYYY-MM-DD HH:MM:SS
tags: ['tagA', 'tagB', 'tagC']
categories:  category subcategory
permalink: /:categories/:title
---

The --- are very important!!
The content inside is self-explanatory. “Categories” is a very useful feature: categories are used by Jekyll to classify and organize your posts into directories in your repository. From the above front matter, Jekyll will create a directory hierarchy like /_site/category/subcategory/post.html. Everything is well organized already.

Then you can see “permalink” here: it will automatically take the category names, separate them with /, prepend them to the title and form a permalink for the post. So your post will have a link similar to http://username.github.io/category/subcategory/title.

“Tags” are what they are elsewhere. You can organize your posts according to tags by mentioning them like this in the posts and then creating lists where you can iterate over the tags with post.tags.

“Layouts” deserves a section, so follow along.


The _layouts

A “layout” is similar to templates in other frameworks’ ecosystems. Layouts are good for reusability, as you don’t have to add the same code blocks (take head, header and footer, for example) to each HTML file; instead, they are plugged into all of the HTML files that inherit a layout which has them.

The default layout for our blog will look something like this:

Create a file named default.html inside the _layouts/ directory with the following content.

<!DOCTYPE html>
<html>

  { % include head.html %}

  <body>

    <div>
      { % include header.html %}
    </div>
    
    <div class="page-content" style="margin-bottom: 15%; margin-top: 7%;">
      <div class="wrapper">
        { { content }}
      </div>
    </div>

    <div>
    { % include footer.html %}
    </div>

    <script src="/static/js/jquery-1.11.2.min.js" type="text/javascript"></script>
    <script src="/static/js/bootstrap.min.js"></script>

  </body>

</html>

The extra space here "{ %" and here "{ {" is intentional to avoid having them interpreted by the engine.

You can create many other layouts while extending this default one.

For example, here is a layout for the post detail page:

---
layout: default
---
<div class="post">

  <header class="post-header">
    <h1 class="post-title">{ { page.title }}</h1>
  </header>

  <article class="post-content">
    { { content }}
  </article>

</div>

Here, there is again a { { content }} block, which will be filled by a post that uses this layout.
So, now you might be getting the picture: the first layout had a { { content }} block which will be filled in by this page template, and then there will be a “post.html” which will fill in this { { content }} block.
This is how “inheritance” is leveraged here.


The _includes

In the above layouts, you saw some include blocks.
Those include blocks are separate HTML modules kept inside the _includes/ directory. So whenever you want to attach a block of code in multiple places, you put it inside an HTML file in the _includes/ directory and then use it with { % include filename %} wherever you need it.

So, as you can see, the default layout that we wrote above includes footer.html, head.html, and header.html, so you need to create those files and put them inside _includes/. I have also used includes for the “Google Analytics” script, “reading time”, “comments”, etc. on my own blog.

The head.html should contain the head block of your HTML page, i.e. it should have the title block, meta tags, CSS imports, etc. The header.html could contain the site header with navigation that lists the various other pages of your blog.

The footer.html can have links to your other profiles and the description for your site.

One more important feature that I haven’t talked about is “collections”. I’m just briefing it here, and I’ll wrap up Jekyll with it.


The collections

Collections are useful when you have to show some data that has many items with similar properties. The best example I can give right now is GitHub projects. Say you want to showcase your GitHub projects on your blog/personal website. The data that you want to show might follow a pattern: the items (projects) can have some common fields, i.e. project name, project URL, some description, an image, etc.
One way to show them on the site is to create an HTML page and fill in all the details one by one, repeating the blocks with all the content.
The other, better way is to use “collections”.

Let me show how to do this.

  • Create a directory named _projects/ inside the root of the blog directory. You can keep any name that you want to give to your collection. Just keep that _ at the beginning.

  • In your _config.yml file, add a block for collections like this:

title: Full Name
... # other fields
... # other fields
... # other fields
collections:
  - projects
  • Create .md files for each project with the details filled in. For example, project1.md could look something like this:
---
name: "Project 1"
repo: "https://github.com/username/project1"
gh-page: "/project1"
liveurl: "https://project1.com/"
---
    

and so on for all the projects.

  • Create the page on which you want to show the projects, say projects.html. In that file, you can iterate over your collection of projects. Like this:
{ % for project in site.projects %}
    <span>{ { project.name }}</span>
    <span>{ { project.repo }}</span>
    <span>{ { project.gh-page }}</span>
    <a href="{ { project.liveurl }}">live-link</span>
{ % endfor %}
    

This way, maintaining that page becomes easier; all you have to update is those .md files whenever the data changes. Refer to collections in Jekyll’s docs for more.

I have been using Liquid tags throughout the article. If you want to learn about them (which you have to), please follow Jekyll’s template guide.

Then we have variables.

I have just scratched the surface; Jekyll has many cool features that will amaze you. You can refer to the docs for all of them. There are some more useful things here.


6. Writing Posts

So, with all the background covered, we now move on to action. Let’s first fill in the index page to list our posts when we create them.

Put this inside the index.md or index.html whichever you have:

---
layout: default
---

<div class="home">

  <h1 class="page-heading">Posts</h1>

  <ul class="post-list">
    { % for post in site.posts %}
      <li>
        <a class="post-link" href="{ { post.url | prepend: site.baseurl }}"></a>
      </li>
    { % endfor %}
  </ul>

</div>

Inside the _posts/ directory, create a file named with the date followed by whatever title you want to give to your post, in the YYYY-MM-DD-title format that Jekyll expects for posts. For example, 2018-03-23-first-post.md or 2018-03-23-first-post.html.

Put the front matter at the beginning. It could look like:

---
title:  "First Post with Jekyll"
date:   2018-03-23 12:30:00
categories: category1 subcategory1
permalink: /:categories/:title
---

Below the front matter, write your post in Markdown or HTML, whichever you prefer.
You can then see the post listed on the homepage. It will have a permalink like http://username.github.io/category1/subcategory1/first-post.

If you followed along, the final directory structure should look like:

username.github.io/
    |- _posts/
        |- 2018-03-23-first-post.md

    |- _layouts/
        |- default.html

    |- _includes/
        |- head.html
        |- header.html
        |- footer.html

    |- _projects/
        |- project1.md
        |- project2.md

    |- _config.yml
    |- index.html
    |- projects.html
    |- static/

For keeping your resources, like CSS and JS scripts and fonts, you can put them inside the static folder and use relative links like static/css/bootstrap.css to link them in HTML.

So this is it for Jekyll. As I said, I have mentioned just the bare minimum for you to get started. You can find other features and play with them as you move on with your blog.
You can find the code for my own blog here.


7. Deployment

In case you are not using GitHub Pages and want to deploy your blog elsewhere, Jekyll has that covered too. You can visit the Deployment page for all the details.

So, we are done with the initial blog setup; you can check out the references for more.

I’ll be writing about other features that I gradually added to my blog, in upcoming posts.
Stay tuned!


Originally published on: [https://www.zeolearn.com/magazine/github-pages-with-jekyll-scratch-up-your-own-blog]


References

Swati Jaiswal | Swati Jaiswal | 2018-05-04 17:35:00

Review I Am A Marvelaholic Sweater

Focus on the good and let them figure the rest out as a family. Congrats Khloe, it is true that the blessing you received will truly change your life…..i have loved watching I Am A Marvelaholic Shirt as a big sister to Rob, Kendall and Kylie and a hands-on wonderful aunt to all your nieces and nephews, can’t wait to see your new role as mommy ❤ xoxo prayers for you of peace and strength…..only take in…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-05-02 14:20:52

Nurses Are Like Pineapples Tough On The Outside Sweet And Will Stick Tank Top

If she was white, then they’d be accusing Trump and her of being “white supremacists”. It doesn’t matter who she is, if she’s picked by Trump then the “Bitching Party” will do what they do best… bitch and whine. Trump could have selected Nurses Are Like Pineapples Tough On The Outside Sweet And Will Stick Shirt and the lib Dems would whine.
That’s all they do, whine and bitch. No solutions to…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-04-29 09:23:25

Shut The Fuck Up Shirt

We true Americans need to resist this at every cost and to help elect people who put America first, not illegal foreigners and criminals first. I lived in Granite City as a child. I remember when the steel mill was producing big time. Then they used it to store Party Like Frank Fight Like Fiona Be A Genius Like Lip Shirt. Glad to hear it’s going to be back in business. MAGA. With all the usual…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-04-28 12:46:17

Hello!

I come across the title of this blog post quite often. In my job, I deal with Django projects (the error is in no way related to Django), and there has been roughly a 4-in-10 chance of coming across this error (because we use watchdog). Every time it happened, I looked it up on Stack Overflow and increased the watch limit by some random number as suggested by the solutions. This time, I realized it deserves more attention since it happens so frequently.

On looking it up, I found out that the watchdog Python package uses native APIs as much as possible, which is why it relies on the Linux kernel’s inotify API for doing its job on Linux distros. About inotify, from the man page:

The inotify API provides a mechanism for monitoring file system events. inotify can be used to monitor individual files, or to monitor directories. When a directory is monitored, inotify will return events for the directory itself, and for files inside the directory.

inotify works in the following three steps:

  • initialize
  • add watch
  • remove watch


Yes. Absolutely. Doesn’t the process seem legible? Let’s understand the terminology though:

A watch specifies the pathname of a file or directory, along with some set of events that the kernel should monitor for the file referred to by that pathname.
A list of watches, called a watch list, is maintained and manipulated (watches added or removed) as paths are added or removed.
But, wait. Unless it’s an entirely new project, I do not really add or delete so many files that this error should pop up. Then what happens?
There’s a good reason to it.

Only when the underlying object and its resources are freed for reuse by the kernel are all associated watches automatically freed.

Which makes sense: while one file is in use, the resources required by its watch might not be released by the time you have unknowingly added another watch.
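
To make the “unknowingly added watches” part concrete, here is a minimal watchdog sketch of the kind of code that ends up creating them. Watching a directory tree recursively makes watchdog (through inotify) register roughly one watch per directory, so pointing it at a big tree (a virtualenv, node_modules, etc.) can burn through the default limit quickly. The path below is just a placeholder.

import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class LogChanges(FileSystemEventHandler):
    def on_any_event(self, event):
        print(event.event_type, event.src_path)

observer = Observer()
# recursive=True means roughly one inotify watch per directory under the path,
# which is what eventually exhausts max_user_watches on large trees.
observer.schedule(LogChanges(), path="/home/user/projects", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()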

What was the need of something like this?

  • To efficiently monitor filesystem objects.
  • To auto-compile a file/project as and when a change is made.
  • To restart/reload services depending on particular files.

Maybe more. Write down in comments if you use it for something more awesome.
From the man page,

With careful programming, an application can use inotify to efficiently monitor and cache the state of a set of filesystem objects.

When you see an error like:

File "/home/unixia/projects//venv/lib/python3.5/site-packages/watchdog/observers/inotify_c.py", line 402, in _raise_error raise OSError("inotify watch limit reached") OSError: inotify watch limit reached

It is because inotify has consumed all the kernel memory it was by default allowed to.

In order to limit the amount of kernel memory consumed by inotify, we could use any of the following /proc interfaces:

  • /proc/sys/fs/inotify/max_queued_events
  • /proc/sys/fs/inotify/max_user_instances
  • /proc/sys/fs/inotify/max_user_watches

A way to deal with this error by making use of the /proc interfaces above:

Temporarily,
sudo sysctl fs.inotify.max_user_watches=<preferred value>

Permanently,
as per your distro, add fs.inotify.max_user_watches=<preferred value> to your sysctl settings (for example, /etc/sysctl.conf),
and then reload sysctl (for example, with sudo sysctl -p).

Now, how do you decide on a preferred value?

Check the maximum number of watches,
cat /proc/sys/fs/inotify/max_user_watches

Keep your preferred value anything above this number.
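
If you want something less arbitrary than picking a random bigger number, remember that a recursive watch needs roughly one inotify watch per directory, so counting the directories under the trees you usually monitor gives a reasonable lower bound. A small sketch (the path is just an example):

import os

def count_dirs(path):
    # Rough estimate of the watches a recursive watch on `path` needs:
    # one for the root plus one per subdirectory.
    return 1 + sum(len(dirs) for _, dirs, _ in os.walk(path))

print(count_dirs("/home/user/projects"))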

Wait, but what if I accidentally write a very big number?

No problem. Remember that the kernel memory is used only when a watch is being used. But of course, if you end up eating all the memory because of an exorbitantly large number of watches, then you might be in trouble, although the chances of that happening are quite low.

Thanks for reading.

Shivani Bhardwaj | Imbibe Linux | 2018-04-28 10:03:11

27th April, 2018, and here I am at GIDS, fortunate enough to attend such a marvelous conference before the beginning of my career as a software developer. I am thankful to my Outreachy mentor Marielle Volz, Wikimedia and Software Freedom Conservancy for providing me the chance to visit Bangalore and attend the 11th edition of this conference.


Saltmarch, along with other sponsors, has organized this 5-day extravaganza. I will be blogging in detail about all the talks I attend today and my learning, but what could be better than blogging live?
In the beautiful location of the J. N. Tata Auditorium, we have speakers who are the best in their domains and delegates of all ages: data scientists, architects, backend developers, designers, etc.

The opening talk was by Mark Richards, about architectural modularity, its benefits and the trade-offs. Given that it was a keynote, he took the opportunity to give an overview of the need for modularity and microservices. He will be conducting a hands-on session tomorrow to put everything he delivered into practice. Next, there were three 15-minute keynote sessions, and my favourite among them (or the one I understood the most) was Cognitive Serverless Architecture.

Mark Richards after session 1

After some hot tea, I was lucky enough to attend Neal Ford’s session. He covered stories of projects that failed, what happened and why. I couldn’t get enough of him and I am glad there is another keynote session by him after lunch. He held the audience for the entire session and there was good learning.

Neal Ford’s talk

Next, I am attending “Serverless? Not so FaaS!” by Matt Stine. I wanted to know what serverless architecture is and how it is better than cloud-native architecture. The talk has demos and interesting content, but for me, given I have no experience with Azure, I am not finding it easy to understand. There are Maven, containers, Spring, Azure and many more tools in action. But the idea of FaaS, Function-as-a-Service, is new to me and hence I am glad I attended this session.

**************Lunch Break**************
For lunch and other meals, I must say that the quality was pleasant and everything here is well managed. The tactic for handling such a crowd is to have multiple tables serving the same food. I also visited the __ of Sapient and IBM. Parual Bansal, an employee of Sapient, showed us her work on virtual reality. She painted objects on a page and made them move through a camera and language processing. She even gave me a card and hand-wrote on it all the tools she used, along with her email ID. At the IBM __, I got to know about IBM Cloud, which provides around 140 services for free, and the attendant also told me about the developer platform by IBM where they take up challenges, projects, tech talks and webinars.
**************During talk**************
Currently I am attending Neal Ford’s session on embracing change, because technology will always be changing. For this there are evolutionary architecture and fitness functions. Next will be Mark Richards’ talk on reactive architecture patterns.

Do find time to read the detailed blog that I’ll be writing later. Well, this has been a happening conference, with great management, wonderful speakers and a diverse crowd. Hats off to the volunteers. I am looking forward to attending this conference in the future, more than once, and I am glad such a conference takes place in India. Thank you, Saltmarch!

Sonali Gupta | It's About Writing | 2018-04-27 09:24:20



Smashing Note SINCURR Shirt

Cats aren’t by nature, it’s not something you can blame them for. At the same time though, thinking they’re anywhere near better than dogs is a flat out fallacy.  

Julia Lima | Thu thm t uy tn | 2018-04-27 04:08:11

Easily distracted by horses shirt

I’m sure you ladies aren’t perfect, don’t go with that “marriage before children” crap, marriage is not a sure thing. Kail was married and Javi treated her like crap, he blamed her for the Easily distracted by horses shirt, so she said she didn’t want any more kids with him (I don’t blame her); their marriage was over long before their divorce. You only see what MTV wants you to see, those girls are fantastic…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-04-26 06:33:44

We recently organized Conf & Coffee 2018 in Vancouver, BC, and one of the bigger tasks we had to take care of was designing and printing our conference badges. A conference is all about people - people who come there to learn, meet new people and have a good time. We wanted to make it easier for them and create usable badges.

Name badges are often bulky and awkward, but they serve a purpose. A well-designed badge can help start a conversation - and get back to it, even if you suddenly forget someone’s name. We also wanted the attendees to be able to easily find speakers and organizers in the crowd. It also helped with initial registration, giving people a reason to come to the registration desk and chat with us.

We had a few requirements in mind before printing the badges:

  • making sure the name is clearly visible and readable at a distance

  • making the prints sturdy enough to be usable for a 2-day conference

  • color-coding them for easy recognition of attendees, speakers and organizers

  • having an adjustable lanyard, so that people of all heights can feel comfortable wearing them

  • encouraging everyone to use their badges as they please - add interests, stickers, preferred pronouns etc.

Design and prints

We had a wonderful initial design for the badge created by Alanna Munro and worked from there by creating two more color options - we ended up using the original purple color scheme for speakers, adding a peach color for attendees and a green one for organizers.

We then took the SVG files and prepared them for printing, working closely with a wonderful local print shop. They helped us with:

  • creating an image in Adobe Illustrator - the original design by Alanna was created in Illustrator, but since the person mostly working with the designs had more experience with Sketch, we decided to use it for brainstorming

  • adding bleed - we had a sense that we were going to need bleed, but initially added it in the SVG projects rather than using the Illustrator tooling for it; it turns out it takes about 20 seconds to add if you know how

  • making sure everything aligns just right, especially the names and the speaker talk information - a few people had longer names and non-English characters in them, some of our speakers had two talks during the conference

  • adjusting the colors for printing - we had a color palette defined in HEX that looked great on a screen, but needed a splash of cyan to make it more vivid in print

  • choosing the right paper - uncoated heavy white cardstock (130lb), as we wanted people to write on them

  • cutting the prints and drilling holes - it might seem obvious, but we were initially a bit worried we might need to cut and drill the holes on our own, turns out it’s something a print shop will gladly do

We got the initial prints and stress-tested them thoroughly - writing on them, putting them under pouring water, pulling at the lanyard, etc. - and they were perfect for our needs. We were very lucky with the print shop service, as they were responsive and quick to adjust the order to our needs.

Lanyards

We decided to use twine for our adjustable lanyards. Initially, we wanted to use regular packaging twine, but it was too brittle, so we used white cotton twine instead - one spool was enough for all our badges.

To attach the lanyards we needed:

  • big enough holes in badges on each side (0.125in) - this also allowed the badges to not turn too much, if properly adjusted

  • two pieces of twine (one for each side)

  • a sliding knot to connect the two pieces - we used a fisherman’s knot

It sounds simple, but keep in mind we had about 130 badges with pre-printed names and a bunch of blanks in case of last-minute cancellations/swaps. It took two people and two episodes of Westworld - I don’t think you could do it with one person and four episodes, because without my partner helping me I’d probably call all the other organizers for an emergency twine party.

Lessons learned

  1. Design - it’s not easy

    If you are not a designer, don’t assume you can make the badges look good on your own. Our badges looked great, but most of it was thanks to a simple and adjustable design that we could then reuse. The back side of the badges had a simple table for names and Twitter handles, because we didn’t have a clue as to what else we could put there that would look good and be usable (or, at least, not look terribly out of place). Make sure to work with a designer.

  2. Timing the prints - you need last-minute adjustments

    We decided to print the badges after finalizing the ticket sales, but we didn’t expect too many swaps. That turned out to be an overly optimistic assumption, as many people suddenly remembered about the conference a few days before it and were asking about last minute changes. We ended up ordering additional blank badges, so that people could write their own names on them.

  3. Informing attendees at registration - especially on adjustable lanyards

    Yes, we had the adjustable lanyards - but we didn’t do a great job explaining it at the registration desk and some of our attendees were initially very uncomfortable. We could also encourage people more to draw on them and use them, as, even though we had a prop just for that, not many people felt encouraged to do so. The back of the badge with a table to put names and Twitter handles in was not used by most of the attendees, so maybe we could figure out a way to better use it in the future.

  4. Adjustable lanyards are appreciated - but we could improve the process of attaching them

    We heard positive feedback about the lanyards, especially from attendees that usually have some issues with awkward placement of conference badges, which was great and we’ll definitely make them again. One thing we could improve is making sure more people could attach them and the process is less time-consuming.

  5. Conference logo - you ain’t gonna need it

    We didn’t use the conference logo on the badges, as our design looked great without it. Our attendees knew what conference they were attending and we couldn’t think of a way of putting our logo on the badge without overshadowing the more important bits.

All about the people

A conference takes a lot of work and, although most of the tasks are simple and repeatable, it gets surprisingly hard and tiresome. It was a great experience, especially since I got to work with a wonderful group of people.

Thank you Brooke, Rose, Manil, Darryl, Nichole and Steve for welcoming me as a co-organizer!

Thank you Gavin, Paulina, Robert, Wendy, Andrea, Bernadette, Stephen and Daruvin for volunteering and making this conference happen!

Alicja Raszkowska | Alicja Raszkowska | 2018-04-22 16:13:00

This week I started out as an Open Source Advocate at Zalando in the shiny new Open Source team with Paul & Per. Together, we are the 3 P’s!

The team is meant to develop Zalando’s Open Source strategy and consult the wider organisation on how to implement it safely, measurably and with impact in mind.

 

Twitter post [https://twitter.com/therealpadams/status/985917890732920833]
The week has been exciting and fun; we have been setting up our initial backlog of upcoming tasks, drawing up our team / open source strategy and, of course, getting to know each other!

Learnings 

  • I learned about KDE! The first day, I disappointed my Manager by asking him what that acronym meant 😉
  • I am curious to dive deeper into ‘community metrics in open source’. There are a lot of tools and articles out there and there is certainly a lot to learn in this space.

Announcements

My last blog post was about wrangling in the OPEN space for MozFest. Considering I will be doing a lot of things related to the OPEN (source) world, here is my new column in this space:

OPENWORLD

Link to OPENWORLD

Looking forward to lots of learning in this OPEN world 🙂

Princiya Marina Sequeira | P's Blog | 2018-04-21 12:22:46

Version 2.7 of the Commons app has just been released for beta testing on the Google Play Store! \o/ Please feel free to register for beta testing if you would like to help test the new features. If you experience any bugs or crashes, or if you find any aspect of the new features to be unwieldy, please do let us know on GitHub.

New features

New “Nearby places that need pictures” UI with direct uploads (and associated category suggestions)

You can now upload a photo directly from the map or list of nearby places that need pictures! Below are screenshots of my workflow for uploading a photo of “Queen Street Mall”:

  1. Go to the map of Nearby Places and select the corresponding pin from there, or the corresponding item from the list
  2. Tap the camera or gallery button, and select or take an image as usual
  3. The title and description of the image are automatically pre-filled in the next screen, but you can edit them if you wish
  4. If that item has a Commons category associated with it, that category will be on the top of the list of suggested categories

 

Enabled two-factor authentication login

Power users with two-factor authentication enabled for their account can now log in to those accounts via our app.

Added Notifications activity to display Commons user talk messages

Any user talk messages that you received can now be viewed via the “Notifications” screen (can be accessed via the navigation drawer).

Added real-time location tracking in Nearby

The map of Nearby places should now track your real-time position, moving your marker on the map as you move.

Improvements to UI of navigation drawer, tutorial, media details view, login activity and Settings

Added option to nominate picture for deletion in media details view

You can now nominate (your own) pictures for deletion in the media details view.

Also, too many crash and bug fixes to mention!

Various crashes and bugs that are commonly encountered should have been fixed in the latest version. If you encounter any further crashes, please send feedback to us in the popup dialog.

Join our community

We have had a record number of contributors to our GitHub repository last month! 🙂 If you are interested in joining a diverse community of volunteers and grantees to work on the Commons app, check us out on GitHub. Both technical and non-technical contributors are greatly appreciated – non-technical contributions may involve testing, translations, patrolling, or writing documentation. More information for new contributors can be found in our GitHub wiki.

Coming up next….

  • Wikidata edits – uploading an image via Nearby will add it to the P18 (image) property of the associated Wikidata item and remove that item from Nearby
  • A showcase of featured images
  • Multiple uploads
  • New UI for the main screen

A huge thank you to everyone who has supported us thus far! ❤

Josephine Lim | cookies & code | 2018-04-20 17:00:09

Last week I joined a cohort of 43 people from 12 countries in the beautiful city of Eindhoven, Netherlands, for MozRetreat! We were a team of friends, partners, network members, Mozilla staff and first-time collaborators who came together to become festival wranglers and shape the design of the MozFest program.

I was invited because of my continued involvement with Mozilla through Outreachy and the Mozilla Open Leader program.

The circular-open-seat arrangement

MozRetreat was a super-charged three-day event for planning, dreaming and collaborating. Here, we examined what works about the festival, challenged the old ways of doing things and explored new opportunities for protecting and promoting the health of the internet.

 

This is the 9th year of MozFest and the theme this year is ‘DATA’. There will be keynote sessions, lots of planned activities and impromptu conversations on the 5 broad topics below:

  1. Openness (Open innovation, all things OPEN)
  2. Privacy & Security
  3. Web Literacy
  4. Digital Inclusion
  5. Decentralisation

In addition, there will be a week of meetups, hackjams, and unconferences run by allied organizations.

At the end of the retreat we chose which space we wanted to wrangle. I chose to be a wrangler for the Open space! A wrangler is someone who designs and delivers a part of MozFest.

All in all, I enjoyed every minute spent with this amazing bunch of humble, smart and talented people. Over the next few months we will work collaboratively to deliver the best of MozFest.

Come and be a part of the festivities in London this October to celebrate Internet Health!

 

Princiya Marina Sequeira | P's Blog | 2018-04-15 18:52:39

UI/UX Designing.

Last week, I worked on adding sticky header functionality to a nested table in which each row contained a subtable. In this post, I’ll share the things I learned while implementing this functionality and a link to the code.

1. Avoid using fixed pixel dimensions for the width property

If you define the width in px, it will be fixed. On the other hand, if you define it in %, it will be relative to its containing element or the screen width, which makes your web page responsive and scalable. Also, if you use percentages you don’t have to calculate exact pixel values, which saves a lot of time.

2. Understand the Stacking Context

You may have experienced that in some cases changing the z-index to a higher value doesn’t change the display order of HTML elements. This can happen when two HTML elements are in different stacking contexts.

You can read more about it here — Stacking Context

Sticky Header for Tables

There are 2 ways to implement this:

  • Internal scrolling — This can be done using the overflow CSS property. But in this case the user has to interact with two scroll bars on the page, and having multiple scroll bars on a single page confuses users, which leads to a bad user experience.
  • Window scrolling — Applied when the data to be scrolled is the main focus of the page. It should also be used when the table has a nested table, because internal scrolling for multiple tables would further confuse users.

Table without Sticky Header —

After scrolling down, a new user would find it difficult to understand the table values because they can no longer see the header. This is particularly a problem in large-scale projects because of the huge data sets.


Table with Sticky Header

Making the header stick to the top when the user scrolls down the page is a good way to present the information, especially for new users, who no longer need to memorize the header content or scroll back and forth to understand the table contents.


Sticky Header Implementation— Link

Thanks for giving it a read!

Sejal Khatri | Stories by Sejal Khatri on Medium | 2018-04-12 05:42:55

On May 4th 2017, I was accepted as an Outreachy Intern at Mozilla. Since then, I have successfully completed my internship, attended two work weeks hosted by Mozilla, been an active, everyday contributor to Firefox and received an offer to be a Summer 2018 Intern at Mozilla’s Mountain View office. In this blog post, I talk about my experience with the application process of the Outreachy Internship Program, the internship itself and more.

Before I begin, here’s a link if you want to read more about the Outreachy Program.

How did I get to know about the Outreachy Program?
I got to know about the Outreachy Program in December 2015 from a couple of friends from college. I was in my first year of college and I had this urge to just DO something with all the free time I had during the week. So I reached out to this guy from my college, on Facebook, who’s now a very dear friend to me, after reading an answer he wrote about GSoC on Quora. From him, I learned about Outreachy and when the applications opened for Summer 2017, he urged me to apply for it and so I did.

What did I find most challenging about the application process?
The applications for the Outreachy Program expect aspiring interns to make a small contribution to a relevant FOSS project as an eligibility criterion. So halfway through March, 20 days before the application deadline, I anxiously set out to fix my first bug on Firefox. When I look back at it, I think that the most challenging part of my application process was that I had to pick an organization and a project to work on. I was inexperienced and didn’t know what to pick. In the end, I chose the project that I later got to work on as a part of my internship, simply because that was the only project with a description I understood and I felt I could do it.

What did I do during the application process?
I emailed the mentor, got myself assigned to a very simple bug and sent in a quick patch. From then on, I kept in touch with my mentor on IRC and solved a series of many small bugs with his help up until the day I was accepted. I feel like my goal at that point was to do more than just the bare minimum of submitting a single patch to become eligible. At some point in April, I remember thinking to myself that, even if I don’t get accepted in the end, all the effort I was putting into learning to contribute to Firefox was going to be worth it. I clearly knew that I was going to continue to contribute code to Firefox for a long time no matter the outcome.

What was it like to get accepted?
When I think back to that day, I remember how important it was to me that I get accepted as an Outreachy Intern. I had been sitting on the couch idly for about four hours waiting, when I finally got to know that I was accepted. Honestly, I was just relieved and happy.

How was my internship experience?
My internship experience was nothing short of amazing. No, seriously. I loved everything about my simple, beginner friendly project, Mozilla, Firefox and most of all, my amazing mentor and the people at Mozilla, so much so that I decided to just hang around as a volunteer even after my internship ended. I feel like I have learned so much since then.
Later that summer, I even got a chance to attend Mozilla’s All Hands Meeting in San Francisco and that definitely helped me get a better idea about the company and the people I worked with, which was great.

What did I like most about my internship?
I feel like I gained a lot from this internship. I learned how to collaborate with others on a large open source project, learned how to write code with good practices and most importantly, I learned not to be afraid at all to ask questions, thanks to my mentor who was super patient and very kind to me. Having a mentor like that and being able to learn by asking any questions I had, without any fear of judgement, is what I liked most about my internship.

What did I do after my internship ended?
As I had previously planned, I continued volunteering my time to Firefox. While it was my Outreachy project that introduced me to everything and taught me how to be a good contributor, I believe it was what I did after my internship ended that has extended my education. I now contribute to Firefox on most days, and the amount of knowledge and experience I have gained from it is unbelievable. I’d like to recognize the efforts of Johann here (my mentor from the Outreachy project), who has helped me learn so much and grow as a developer every day. Thank you, Johann.

What I’m most looking forward to

I’m most looking forward to my internship with Mozilla this summer in their Mountain View office. Until then, I’d like to keep up with my contributions as a volunteer for the next two months and continue learning every day.

Thank you for reading till the end.

Before I end my post here, I’d also like to extend my thanks to the organizers of the Outreachy Program, the Outreachy coordinators at Mozilla and friends who told me about this program and encouraged me to apply. Thank you!

Prathiksha G | Stories by Prathiksha G Prasad on Medium | 2018-04-11 19:08:24

HOW CAN YOU CHOOSE A TRULY HIGH-END APARTMENT?

What makes an apartment truly worthy of the words “high-end”? The Akari City project, in the An Lạc urban area, will answer these questions.

According to experts, a high-end apartment project must fully meet a set of “necessary & sufficient” criteria that not everyone is familiar with. So keep the important points below in mind to choose the apartment that best fits the needs and goals of you and your family.

Area

First of all, you…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-04-08 04:20:54

Hindi Wiki Conference 2018 — A confluence of people working towards making Knowledge freely available in Hindi Language.

Third Hindi Wikipedia conference organized by the Hindi Wikipedia community.

Hindi Wikipedia Community

This January, I got the opportunity to attend the Hindi Wikipedia Conference held in Delhi, India. I even got a chance to meet Wikipedians from the Odia and Maithili communities.

I explored many new things, some of them being coordinating such a large-scale event and helping with the media coverage of the conference. One of the most important things I did was to create awareness about the advantages of using the Programs and Events Dashboard. I also helped Maithili Wikipedians from Nepal by solving the problems they faced with the Dashboard.

As the saying rightly goes, “A picture is worth a thousand words”, so here are some pictures from the conference —

Meeting Yohann Thomas.
From left- Meeting Abhishek Suryawanshi, Ashish Bhatnagar and Krishna Chaitanya Velaga.
Meeting Rahul Deshmukh, Sukriti Dwivedi, Sushma Sharma, Shweta Yadav, Shreya Dwivedi and others..
Meeting Brijesh Vyas, Prateek Malviya and Biplab Anand.

Sejal Khatri | Stories by Sejal Khatri on Medium | 2018-04-06 17:01:54

Two weeks back I got the opportunity to attend the FOSSASIA Open Tech Summit 2018 using my Outreachy travel allowance, which I received for my Outreachy internship with Sugar Labs during Summer ’17. FOSSASIA is an organization developing software applications for social change using a wide range of technologies. It has been participating in programs like GSoC to promote open source. The event was a four-day conference with tracks about AI, Cloud, Blockchain, Web and more.

Day 1 concluded with a panel discussion with technologists and developers on the forthcoming opportunities and challenges in technology. The exhibition helped me explore various companies and their ongoing projects. I also collected a lot of goodies, which is a core motivating factor for students :P The UNESCO Hackathon provided an opportunity to interact with the developers and create open source applications that could tackle sustainable development challenges.

Besides the tech, Singapore offers amazing places to visit — Universal Studios, Merlion Park and Gardens by the Bay to name a few.

Prachi Agrawal | Stories by Prachi Agrawal on Medium | 2018-04-05 12:53:24

I attended LibrePlanet 2018 in Boston last week. Thanks to the Free Software Foundation & Outreachy, I received a travel grant to attend this event because of my participation in Outreachy last year.

LibrePlanet is an annual conference hosted by the Free Software Foundation for free software enthusiasts and anyone who cares about the intersection of technology and social justice. The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user freedom.

This year was LibrePlanet’s 10th anniversary and the theme was “Freedom. Embedded.”

The event took place on March 24 – 25, at Stata Center, Massachusetts Institute of Technology, Boston.

Day 0:

I had signed up to be a volunteer and we had a volunteer briefing session on Friday, 23rd March at the FSF office. My task was to be an IRC monitor for two hours on both the conference days.

Day 1:

The first day was overwhelming.

  • Firstly, this was a celebration of free software!
  • Secondly, the conference was at MIT! I was extremely happy to visit this place.
  • Thirdly, I got to meet Outreachy organisers in person 🙂

My favorite talks from Day 1 were the following:

  1. You think you are not a target? A tale of 3 developers by Chris Lamb: As software developers we are vulnerable to attacks like simple malware injection! Through an engaging story, this talk spoke about this growing threat and how reproducible builds can help protect against it.

  2. State of the Onion: A handful of Tor contributors were on this panel. They spoke about how they added new features to the Tor browser, made Tor more usable, and grew the community with new outreach initiatives.

Screen Shot 2018-04-05 at 12.56.38

Day 2:

The second day’s opening keynote ended with this great piece of advice.

Screen Shot 2018-04-05 at 13.13.11.png

My favorite talks from Day 2 were the following:

  1. Diversity in Free Software by Marina, Outreachy co-organiser. I have written a more detailed post here.
  2. Freedom, devices & health. This panel introduced leaders that bridge industry, community, and individual experiences.

A more detailed report on LibrePlanet can be found here.

I had a great time attending talks, interacting with the community and volunteering.

My laptop got more stickers 🙂

WhatsApp Image 2018-04-05 at 11.53.01

 

Princiya Marina Sequeira | P's Blog | 2018-04-05 11:59:03

Last week I attended LibrePlanet. Out of the many talks, ‘Diversity in Free Software’ by Marina, Outreachy co-organiser, was my favorite <3. Here is an attempt to write sketchnotes from the talk. Yes, sketchnotes is a thing.

libreplanet_full
Diversity in free software

I used to fancy having an all-green Github dashboard 🙂

I have always wanted to contribute to opensource, and last year I wanted to ensure that this task from my long-pending todos was checked off. With Outreachy this was possible, and it has been a great experience overall. I had applied to the program earlier, in 2015, but didn’t get through. I then made sure to do the homework, and it paid off last year.

The current round of Outreachy applications is closed. While most of you are awaiting results, don’t get disappointed if you do not get through.

There is always another chance for your hard work!

What is unique about Outreachy?

The application process asks you to make an initial contribution to the project you wish to apply to. This gives you ample time to learn and understand the project, and to get a feel for what it is like to work in the open and with your mentor.

Selected applicants are encouraged to blog about their activity every two weeks, and here is a curated list from Outreachy alumni. This is my favorite reading list, and all of my blogging motivation and inspiration comes from there.

Diversity in opensource

The diversity count for opensource is definitely improving! While Outreachy is one such medium to increase diversity in opensource, there are other initiatives too, which you can see listed in the above cartoon.

The best thing each one of us can do is to uplift each other. As the saying goes, ‘Charity begins at home’. I like the quote from this article –

Little time for charity? Consider contributing to Open Source

What is it like to work in the open?

I still remember how I used to fret while sitting next to my boss during the first few weeks of my first job as a programmer. I used to be constantly self-conscious during our pair programming sessions. As with anything else, with practice and time I got over this fear and anxiety.

Do something that you were never comfortable doing! That’s called getting over fear!!

Contributing to opensource was a similar experience to start with. Our code is out there for the public. When I made my first PR, I was both excited and nervous. I have written here about how I overcame the Imposter Syndrome.

Practice makes one perfect!

One thing I must say, the opensource community is very welcoming. We are all on one shared mission – to learn && grow!

What next?

Working in the open has definitely made me more confident than when I started. There is no looking back from this point. My journey has just begun and I have a long way to go….

Screen Shot 2018-04-04 at 16.58.47

Here is a list of my opensource work so far.

Princiya Marina Sequeira | P's Blog | 2018-04-04 15:33:14

The Akari City Bình Tân Project – Detailed Information

OVERVIEW OF THE AKARI CITY BÌNH TÂN APARTMENT PROJECT
  • Project name: Akari City Bình Tân
  • Project location: Võ Văn Kiệt Boulevard, An Lạc Ward, Bình Tân District, Ho Chi Minh City
  • Developer: Nam Long Group
  • Development partners (50% of the capital): Hankyu Realty and Nishi Nippon RailRoad
  • Scale of the Akari City apartment project
  • Total planned land area granted: 8.6 ha
  • Building density of only 33% (priority given to greenery and the…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-04-02 16:36:59

Following up on my earlier article, here is a tech cartoon explaining web tracking!

web-tracking-blog-overview-full

Read the entire article here.

Princiya Marina Sequeira | P's Blog | 2018-04-02 11:06:20

In the wake of the Facebook scandal, with all of us concerned about and debating online privacy, here is an attempt to explain in a nutshell how web tracking works.

What is web tracking?

Web tracking is the practice by which websites identify and collect information about users, generally in the form of some subset of web browsing history.

Is it evil?

Web tracking isn’t 100% evil, but its workings remain poorly understood. From the perspective of website owners and of trackers, it provides desirable functionality, including personalisation, site analytics, and targeted advertising.

Without trackers, an e-commerce website would have to treat every user as a stranger and would be unable to present personalised content.

What is the bad side?

After you switch websites, advertisements for products you’ve just looked at, or products you looked at a few weeks ago, reappear! The greatest concern is when the trackers come from third-party websites.

This Twitter thread describes how much of our information is being collected by Google and Facebook.

Screen Shot 2018-04-02 at 07.06.02
Twitter thread describing data privacy issue

Are you visiting just one site?

Say for example, when you go to nytimes.com, the New York Times knows you’ve visited and which article you’re reading – in this case, the New York Times is a “first-party”. Because you choose to visit a first-party, we are not particularly concerned about what the first-party knows from your visit. A third-party tracker like doubleclick.net – embedded by nytimes.com to provide, for example, targeted advertising – can log the user’s visit to nytimes.com.

The number of trackers present on any website depends on what the website owner has decided.

What is third-party tracking?

Third-party web tracking refers to the practice by which an entity (the tracker), other than the website directly visited by the user, tracks or assists in tracking the user’s visit to the site.

Third-party trackers are creepy

Once there is one third-party on a page, that third-party has the ability to turn around and invite any number of other third-parties to the first-party webpage.

Your personal information is valuable and it’s your right to know what data is being collected about you – your age, income, family’s ages and income, medical history, dietary habits, favourite web sites, your birthday…the list goes on.

The trick is in taking this data and shacking up with third-parties to help them come up with new ways to convince you to spend money, sign up for services, and give up more information.  It would be fine if you decided to give up this information for a tangible benefit, but you may never see a benefit aside from an ad, and no one’s including you in the decision.

Tracking is not anonymous

You might think that this tracking is anonymous, since your real name is not attached to it. Many third-parties do know your real identity. For example, when Facebook acts as a third-party tracker they can know your identity as long as you’ve created a Facebook account and are logged in – and perhaps even if you aren’t logged in. It is also possible for a tracker to de-anonymise a user by algorithmically exploiting the statistical similarity between their browsing history and their social media profile.

Track the trackers!

Using Lightbeam, a privacy browser extension, discover who’s tracking you online while you browse the Web.

Screen Shot 2017-11-17 at 15.38.35
Lightbeam graph showing first and third-party website trackers

When you activate Lightbeam and visit a website, the browser extension creates a real time visualisation of all the third-parties that are active on that page. As you then browse to a second site, it highlights the third parties that are also active there and shows which third parties have seen you at both sites. The visualisation grows with every site you visit and every request made from your browser.

Conclusion

There is much more to web-tracking than what is written here. My curiosity for privacy started only last year when I interned with Outreachy – Mozilla for Lightbeam.

Stay safe  and make the internet a healthier place!

 

Princiya Marina Sequeira | P's Blog | 2018-04-02 09:47:01

The first part of the drawing with CSS tutorial was all about box shadows. They are pretty flexible when it comes to creating multiple rectangular and circular shapes, but there are some limitations.

Three main disadvantages of using only box shadows for drawing:

  • their rotation is the same as the rotation of the parent element
  • their size is defined relative to the parent element
  • there is a limit of three base shapes

In this blog post I want to focus on another tool for drawing with CSS - gradients. Gradients are images created with a function. In their most basic form, they are a progressive transition between two colors. Let’s take an initial look as to what that means, starting with a simple black square.

.boxy {
  height: 100px;
  width: 100px;
  
  background: black;
}

Now we can add different transitions using a simple linear gradient:

/* From top to bottom */
.boxy {
  height: 100px;
  width: 100px;

  background: linear-gradient(to bottom, black, white);
}
/* From bottom to top */
.boxy {
  height: 100px;
  width: 100px;
  
  background: linear-gradient(to top, black, white);
}
/* From left to right */
.boxy {
  height: 100px;
  width: 100px;
  
  background: linear-gradient(to right, black, white);
}
/* From right to left */
.boxy {
  height: 100px;
  width: 100px;
  
  background: linear-gradient(to left, black, white);
}

So how are gradients defined? Because they are images created with a function, they are applied to the background property, more specifically background-image. Before we go any further, let’s take a closer look at some of the background properties:

  • background-color (keyword/RGB/HSL) - sets the background color of an element
  • background-image - sets one or more background images for an element, including gradients
  • background-position (keyword/px/em/percentage) - sets the position of the background
  • background-repeat (keyword) - defines how background images are repeated
  • background-size (keyword/px/em/percentage) - specifies the size of a background element
  • background-attachment (keyword) - defines the scrolling behaviour

Most of the time there is no need to set all of those properties at once and it’s possible to use a background shorthand to concisely define only those that are needed. All the other properties will be set to their initial values.

The background shorthand is defined as:

background: <attachment> <image> <position> <size> <repeat>;

and it can have multiple layers. The layers are positioned as a stack from the bottom-most declaration - the lowest layer is going to be at the bottom of the stack and the subsequent layers are going to cover it - which we can use for multi-layer drawings!

Linear gradients are defined with the linear-gradient() function that we can use in the background-image property. Let’s dive into the properties of that function:

  • <side-or-corner> or <angle> - the first property of a linear gradient defines where the transition between colors starts and ends, which can be defined either by keywords (to bottom, to right) or angles (180deg, 90deg)
  • two or more <color-stop> values - each has a color value (keyword/RGB/HSL) followed by an optional stop position (percentage/px/em); together they define which colors transition and at which points

The description on its own might not be entirely clear, so let’s look at some examples. To better understand the angle of a linear gradient, you can play with the fiddle below - it’s a linear gradient transitioning between three colors: red, pink and orange (the middle pink color is used to visualize the gradient line better).

Angle 45°

This gradient is defined as follows:

linear-gradient(
  <angle> deg,
  red 0%,
  #ff9a9e 50%, #ff9a9e 50%,
  orange 100%
);
  1. The <angle> value is taken directly from the value in the fiddle.
  2. The first color-stop declaration red 0% means that color red is the first color and it starts at the very beginning of the gradient line at 0%. Technically, the 0% here is superfluous, as it’s the initial value for the first color-stop.
  3. The second and third color-stop declarations define the pink strip - it starts and stops at 50%. The first color-stop is linearly transitioning from being pure red to being more and more #ff9a9e. Then the third color-stop is linearly transitioning from being pure #ff9a9e to being more and more orange - the last color stop.
  4. The last color-stop declaration defines the orange part of the gradient - it goes from where the last color-stop ends at 50% to the end of the gradient line at 100%. Technically 100% here is superfluous, as it’s the initial value for the last color-stop.

A few things worth mentioning at this point:

  • CSS gradients have no intrinsic dimensions - they define a color progression within the element they are applied to and don’t have a preferred size nor a preferred ratio. They repeat so as to fill the container. You can imagine them as stripes of color progressively transitioning from the first color to the last, placed along the gradient line.
  • the CSS function calculates the color values in a way that makes the corners the full defined color - the color stops are measured from the point where a line perpendicular to the gradient line passes through the corner - I found the MDN visualization useful for understanding how it works.
  • Color-stops can have values below 0% and above 100% - since linear gradients don’t have an intrinsic size and are defined along a gradient line, the stop values can be placed at any point along this line.

By now you probably understand pretty well how to use gradients as backgrounds, but how can they be used for drawing? Let’s gather some useful insights here:

  • an element can have multiple background images that can be layered on top of each other
  • each background image can have a precisely defined size (its bounding box) and position for the upper left corner
  • linear gradients can be defined with either percentages or pixels and can be used as background images

Before we get started with drawing, let’s explore the last point a bit more - a linear gradient can be defined using either percentages or lengths.

The first gradient below is defined using percentages and the second one is using pixels. Initially, the angle is 0deg and they both look the same. But changing the angle is going to either change the width of the line (percentage gradient) or the position (pixel gradient). Depending on which variable we want to control, it’s sensible to use an appropriate approach for linear gradients.

/* Percentages */
linear-gradient(
  <angle> deg,
  white 0%, white 40%,
  orange 40%, orange 60%,
  white 60%, white 100%
);

/* Pixels */
linear-gradient(
  <angle> deg,
  white 0%, white 80px,
  orange 80px, orange 120px,
  white 120px, white 100%
);

Angle 0°

Now we have all the tools to draw something using linear gradients. Let’s work through an example, recreating the Haskell logo. It consists of a few simple linear elements, and we’re going to start from the lower right part of the lambda as the base element. If you look at the image closely, you can see that the same element is repeated throughout the lambda and the preceding chevron, simply mirrored at different angles.

.haskell {
  width: 156px;
  height: 144px;
  
  color: orange;
  
  background:
    linear-gradient(
      237.5deg,
      transparent 77px,
      currentColor 77px,
      currentColor 129px,
      white 129px
    );
}

Next let’s mirror the lower right part to create the bottom part of the lambda. We need more fine-grained control for positioning the elements, so let’s set the size of each background element to 156 by 144 pixels with background-size and make sure the background doesn’t repeat itself. Now we’re going to use an initial position for each background element, to make sure it’s in the right place, and double the width of the image.

.haskell {
  width: 312px;
  height: 144px;
  
  color: orange;
  
  background:
    linear-gradient(
      122.5deg,
      transparent 80px,
      currentColor 80px,
      currentColor 132px,
      transparent 132px
    ),
    linear-gradient(
      237.5deg,
      transparent 77px,
      currentColor 77px,
      currentColor 129px,
      white 129px
    ) 156px 0;
  
  background-size: 156px 144px;
  background-repeat: no-repeat;
}

Let’s add the upper part of the lambda - we need to move elements around a bit, to make sure they overlap properly. Also, now the image needs double the single element height.

.haskell {  
  width: 312px;
  height: 288px;
  
  color: orange;
  
  background:
    linear-gradient(
      237.5deg,
      transparent 77px,
      currentColor 77px,
      currentColor 129px,
      transparent 129px
    ) 65px 0,
    linear-gradient(
      122.5deg,
      transparent 80px,
      currentColor 80px,
      currentColor 132px,
      transparent 132px
    ) 65px 144px,
    linear-gradient(
      237.5deg,
      transparent 77px,
      currentColor 77px,
      currentColor 129px,
      white 129px
    ) 156px 144px;
  
  background-size: 156px 144px;
  background-repeat: no-repeat;
}

The next element of the logo is a chevron on the left side of the lambda. It’s made of the same base elements as the lambda, just mirrored a bit. Let’s increase the size of the image, move the lambda a bit to the right and add a black chevron.

.haskell {
  width: 345px;
  height: 288px;
  
  color: orange;
  
  background:
    /* black chevron */
    /* upper part */
    linear-gradient(
      237.5deg,
      transparent 77px,
      #101c24 77px,
      #101c24 129px,
      transparent 129px
    ) 0 0,
    /* lower part */
    linear-gradient(
      122.5deg,
      transparent 80px,
      #101c24 80px,
      #101c24 132px,
      transparent 132px
    ) 0 144px,

    /* lambda */
    /* upper part */
    linear-gradient(
      237.5deg,
      transparent 77px,
      currentColor 77px,
      currentColor 129px,
      transparent 129px
    ) 85px 0,
    /* lower left part */
    linear-gradient(
      122.5deg,
      transparent 80px,
      currentColor 80px,
      currentColor 132px,
      transparent 132px
    ) 85px 144px,
    /* lower right part */
    linear-gradient(
      237.5deg,
      transparent 77px,
      currentColor 77px,
      currentColor 129px,
      white 129px
    ) 176px 144px;
  
  background-size: 156px 144px;
  background-repeat: no-repeat;
}

The last part of the image is the two parallel lines on the right. We need to add a bit of space to fit the lines on the right and add two linear gradients with an angle of either 0deg or 180deg.

.haskell {
  width: 380px;
  height: 288px;
  
  color: orange;
  
  background:
    /* black chevron */
    /* upper part */
    linear-gradient(
      237.5deg,
      transparent 77px,
      #101c24 77px,
      #101c24 129px,
      transparent 129px
    ) 0 0,
    /* lower part */
    linear-gradient(
      122.5deg,
      transparent 80px,
      #101c24 80px,
      #101c24 132px,
      transparent 132px
    ) 0 144px,

    /* lambda */
    /* upper part */
    linear-gradient(
      237.5deg,
      transparent 77px,
      currentColor 77px,
      currentColor 129px,
      transparent 129px
    ) 85px 0,
    /* lower left part */
    linear-gradient(
      122.5deg,
      transparent 80px,
      currentColor 80px,
      currentColor 132px,
      transparent 132px
    ) 85px 144px,
    /* lower right part */
    linear-gradient(
      237.5deg,
      transparent 77px,
      currentColor 77px,
      currentColor 129px,
      white 129px
    ) 176px 144px,
    
    /* black lines */
    /* upper line */
    linear-gradient(
      180deg,
      white 80px,
      #101c24 80px,
      #101c24 132px,
      white 132px
    ) 220px 0px,
    /* lower line */
    linear-gradient(
      180deg,
      white 12px,
      #101c24 12px,
      #101c24 64px,
      white 64px
    ) 220px 144px;
  
  background-size: 156px 144px;
  background-repeat: no-repeat;
}

Now it’s almost done - we just need to add two more white base elements between the orange lambda and the black lines to visually separate them.

.haskell {
  width: 380px;
  height: 288px;
  
  color: orange;
  
  background:
    /* black chevron */
    /* upper part */
    linear-gradient(
      237.5deg,
      transparent 77px,
      #101c24 77px,
      #101c24 129px,
      transparent 129px
    ) 0 0,
    /* lower part */
    linear-gradient(
      122.5deg,
      transparent 80px,
      #101c24 80px,
      #101c24 132px,
      transparent 132px
    ) 0 144px,

    /* lambda */
    /* upper part */
    linear-gradient(
      237.5deg,
      transparent 77px,
      currentColor 77px,
      currentColor 129px,
      transparent 129px
    ) 85px 0,
    /* lower left part */
    linear-gradient(
      122.5deg,
      transparent 80px,
      currentColor 80px,
      currentColor 132px,
      transparent 132px
    ) 85px 144px,
    /* lower right part */
    linear-gradient(
      237.5deg,
      transparent 77px,
      currentColor 77px,
      currentColor 129px,
      white 129px
    ) 176px 144px,
    
    /* white spacer */
    /* upper part */
    linear-gradient(
      237.5deg,
      transparent 77px,
      white 77px,
      white 129px,
      transparent 129px
    ) 110px 0,
    /* lower part */
    linear-gradient(
      237.5deg,
      transparent 80px,
      white 80px,
      white 132px,
      transparent 132px
    ) 206px 144px,
    
    /* black lines */
    /* upper line */
    linear-gradient(
      180deg,
      white 80px,
      #101c24 80px,
      #101c24 132px,
      white 132px
    ) 220px 0px,
    /* lower line */
    linear-gradient(
      180deg,
      white 12px,
      #101c24 12px,
      #101c24 64px,
      white 64px
    ) 220px 144px;
  
  background-size: 156px 144px;
  background-repeat: no-repeat;
}

You can edit this drawing interactively on CodePen.

Alicja Raszkowska | Alicja Raszkowska | 2018-04-01 16:55:00

Marvel’s Avengers Infinity war hand shirt

when everyone says “you have a baby on the way, you should try to start watching what you say”. Oh lighten the F*bomb up. It doesn’t say they are using it in front of kids it’s a funny Marvel’s Avengers Infinity war hand shirt. at a glance scrolling through I thought this was best t-shirt. Then I read the shirt and looked again should have been part of the whole mom experience this shirt – to…

View On WordPress

Julia Lima | Thu thm t uy tn | 2018-03-30 11:14:58

Photo by Vladimir Kudinov on Unsplash

Today, we’re going to look at the paper titled “Robustness in Complex Systems” published in 2001 by Steven D. Gribble. All pull quotes and figures are from the paper.

This paper argues that a common design paradigm for systems is fundamentally flawed, resulting in unstable, unpredictable behavior as the complexity of the system grows.

The “common design paradigm” refers to the practice of predicting the environment a system will operate in and its failure modes. The paper states that as a system becomes more complex, it will have to deal with conditions that weren’t predicted, so it should be designed to cope with failure gracefully. The paper explores these ideas with the help of “distributed data structures (DDS), a scalable, cluster-based storage server.”

By their very nature, large systems operate through the complex interaction of many components. This interaction leads to a pervasive coupling of the elements of the system; this coupling may be strong (e.g., packets sent between adjacent routers in a network) or subtle (e.g., synchronization of routing advertisements across a wide area network).

A common characteristic that such large systems exhibit is something known as the Butterfly Effect. This refers to a small, unexpected disturbance that, through the intricate interaction of various components, causes a widespread change in the system.

A common goal for system design is robustness: the ability of a system to operate correctly in various conditions and fail gracefully in an unexpected situation. The paper argues against the common pattern of trying to predict a certain set of operation conditions for the system and architecting it to work well in only those conditions.

It is also effectively impossible to predict all of the perturbations that a system will experience as a result of changes in environmental conditions, such as hardware failures, load bursts, or the introduction of misbehaving software. Given this, we believe that any system that attempts to gain robustness solely through precognition is prone to fragility.

DDS: A Case Study

The hypothesis stated above is explored using a scalable, cluster-based storage system, Distributed Data Structures (DDS) — “a high-capacity, high-throughput virtual hash table that is partitioned and replicated across many individual storage nodes called bricks.”

This system was built using a predictive design philosophy like the one described above.

Based on extensive experience with such systems, we attempted to reason about the behavior of the software components, algorithms, protocols, and hardware elements of the system, as well as the workloads it would receive.

When the system operated within the scope of the assumptions made by the designers, it worked fine. They were able to scale it and improve performance. However, in the case when one or more of the assumptions about the operating conditions were violated, the system behaved in unexpected ways resulting in data loss or inconsistencies.

Next, we talk about several such anomalies.

Garbage Collection Thrashing and Bounded Synchrony

The system designers used timeouts to detect failure of components in the system. If a particular component didn’t respond within the specified time, it was considered dead. They assumed bounded synchrony in the system.

The DDS was implemented in Java, and therefore made use of garbage collection. The garbage collector in our JVM was a mark-and-sweep collector; as a result, as more active objects were resident in the JVM heap, the duration that the garbage collector would run in order to reclaim a fixed amount of memory would increase.

When the system was at saturation, even slight variations in load on the bricks would increase the time taken by the garbage collector, in turn dropping the throughput of the brick. This is called GC thrashing. The affected bricks would lag behind their counterparts, leading to further degradation in the performance of the system.

Hence, garbage collection violated the assumption of bounded synchrony when it was nearing or beyond the saturation point.

Slow Leaks and Correlated Failure

Another assumption made while designing the system was that the failures are independent. DDS used replication to make the system fault-tolerant. The probability of multiple replicas failing simultaneously was very small.

However, this assumption was violated when they encountered a race condition in their code that caused a memory leak without affecting correctness.

Whenever we launched our system, we would tend to launch all bricks at the same time. Given roughly balanced load across the system, all bricks therefore would run out of heap space at nearly the same time, several days after they were launched. We also speculated that our automatic failover mechanisms exacerbated this situation by increasing the load on a replica after a peer had failed, increase the rate at which the replica leaked memory.

Since all the replicas were subjected to a uniform load without taking performance degradation and other issues into consideration, this created a coupling between the replicas and…

…when combined with a slow memory leak, lead to the violation of our assumption of independent failures, which in turn caused our system to experience unavailability and partial data loss

Unchecked Dependencies and Fail-stop

Based on the assumption that a component that timed out has failed, the designers also assumed “fail-stop” failures, that is, a component that has failed will not resume functioning later. The bricks in the system performed all long-latency work (disk I/O) asynchronously.

However, they failed to notice that some parts of their code made use of blocking function calls. These calls would occasionally borrow the main event-handling thread, leading to bricks seizing up inexplicably for a couple of minutes and then resuming.

While this error was due to our own failure to verify the behavior of code we were using, it serves to demonstrate that the low-level interaction between independently built components can have profound implications on the overall behavior of the system. A very subtle change in behavior resulted in the violation of our fail-stop assumption across the entire cluster, which eventually lead to the corruption of data in our system.

Towards Robust Systems

..small changes to a complex, coupled system can result in large, unexpected changes in behavior, possibly taking the system outside of its designers’ expected operating regime.

A few solutions which can help us make more robust systems:

Systematic Over-provisioning

Systems tend to become fragile when they try to accommodate unexpected behavior near the saturation point. One way to combat this is to deliberately over-provision the system.

However, this has its own set of issues: it leads to the under-utilization of resources. It also requires predicting the expected operating environment, and hence the saturation point, of the system, which can’t be done accurately in most cases.

Use Admission Control

Another technique is to start rejecting load once the system starts approaching the saturation point. However, this requires predicting the saturation point — something that’s not always possible, especially with large systems which have a lot of contributing variables.

Rejecting requests also consumes some resources from the system. Services designed with admission control in mind usually have two operating modes: a normal mode where requests are processed, and an extremely lightweight mode where they’re rejected.
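As an illustration only (this is not code from the paper or from the DDS system), a toy admission controller with a normal processing mode and an extremely lightweight rejection mode might look like the sketch below, where the saturation threshold max_in_flight is an assumed, hand-picked parameter:

import threading

class AdmissionController:
    """Toy admission control: reject new work once the number of
    in-flight requests reaches an assumed saturation threshold."""

    def __init__(self, max_in_flight):
        # Predicted saturation point; hard to estimate accurately in practice.
        self.max_in_flight = max_in_flight
        self.in_flight = 0
        self.lock = threading.Lock()

    def try_admit(self):
        # Cheap check-and-increment; this is the lightweight code path.
        with self.lock:
            if self.in_flight >= self.max_in_flight:
                return False
            self.in_flight += 1
            return True

    def release(self):
        with self.lock:
            self.in_flight -= 1

def handle_request(controller, request, process):
    if not controller.try_admit():
        return "503 Service Unavailable"  # reject instead of degrading everyone
    try:
        return process(request)           # normal operating mode
    finally:
        controller.release()

Even in this toy version, the difficulty the paper points out is visible: max_in_flight has to be predicted up front, which is exactly what is hard to do for large systems.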

Build Introspection into the system

an introspective system is one in which the ability to monitor the system is designed in from the beginning.

When a system can be monitored, and designers and operators can derive meaningful measurements about its operation, it’s much more robust than a black-box system. It’s easier to adapt such a system to change in its environment, as well as manage and maintain it.

Introduce adaptivity by closing the control loop

An example of a control loop is human designers and operators adapting the design in response to a change in its operating environment indicated through various measurements. However, the timeline for such a control loop isn’t very predictable. The authors argue that systems should be built with internal control loops.

These systems incorporate the results of introspection, and attempt to adapt control variables dynamically to keep the system operating in a stable or well-performing regime.
All such systems have the property that the component performing the adaptation is able to hypothesize somewhat precisely about the effects of the adaptation; without this ability, the system would be “operating in the dark”, and likely would become unpredictable. A new, interesting approach to hypothesizing about the effects of adaptation is to use statistical machine learning; given this, a system can experiment with changes in order to build up a model of their effects.

Plan for failure

Complex systems must expect failure and plan for it accordingly.

A couple of techniques to do this:

  1. decouple components to contain failures locally
  2. minimize damage by using robust abstractions such as transactions
  3. minimize the amount of time spent in a failure state (using checkpointing to recover rapidly)

In this paper, the author argues that designing systems by making assumptions about the constraints and nature of their operation, failures, and behavior often leads to fragile and unpredictable systems. We need a radically different approach to build systems that are more robust in the face of failure.

This different design paradigm is one in which systems are given the best possible chance of stable behavior (through techniques such as over-provisioning, admission control, and introspection), as well as the ability to adapt to unexpected situations (by treating introspection as feedback to a closed control loop). Ultimately, systems must be designed to handle failures gracefully, as complexity seems to lead to an inevitable unpredictability.

Robustness in Complex Systems: an academic article summary was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2018-03-26 03:49:52

Life update time! After three great years at Spring, I’m moving on to my next adventure; in April, I’ll be joining Windmill as their fifth engineer, where I’ll be building cloud-based developer tools, as well as company culture.

The most frequent question I’ve got from people upon telling them this news (after “what the heck is Windmill?!”) is, “How did you prepare for your interviews?” (And all the related questions: “How long did you prep for?”, “What were your on-sites like?”, etc.)

This post aims to answer those questions. Here’s some detail on my own personal interview prep and interview process. Of course, everything here should be taken with a grain of salt, as all of this will vary widely based on you and your skills, the sorts of companies you’re interviewing for, etc.

How did you prep?

Resume/Soft Skills

The first thing I did (because it felt like the least intimidating way to ease into the job search process) was to update my resume. I made a big list of all the things I’d done at work since I last updated my resume, then went back over that list, pulled out the most compelling and impressive items, and wrote bullets for them. I also got several rounds of feedback from friends and colleagues in the tech industry. You can never have too many eyes on your resume.

I also practiced talking about my work and my projects, and answering so-called “soft skills” questions. Cracking the Coding Interview has a handy grid to help you organize your thoughts about your projects:

From Cracking the Coding Interview (4th ed.), p. 23

I didn’t physically fill this out, because I’m lazy—but I did go over all of this in my head, making sure I had an idea of the interesting, challenging, and conversation-worthy bits of the major bullet points on my resume.

I’ve been interviewing candidates at work for 2ish years now, so I have some idea of the questions that get asked in interviews and could think ahead about my answers to those [1]: but by far the most useful prep I did here was to have people mock-interview me. Having to answer these questions out loud forced me to think about them more concretely, and I could also get feedback on the things that I was saying that were more or less impressive, red-flag-y, etc. (“If they ask you about conflicts with coworkers, tell story A, not story B, cuz in story A you resolved your disagreement, and in story B you were right and your coworker wrong and the system broke because of it, but even though you were right, it still resulted in the system breaking.”)

Preparing YOUR Interview Qs

Another thing worth doing is preparing a list of questions that you want to ask of your interviewers to get a better idea about the company you may possibly work for. The better your questions for them, the more insight you can get into their workplace—and the more prepared you are with these questions, the less dumb you look in front of your interviewer.

I read a bunch of “questions to ask in job interviews” articles, grilled my friends, and put together a big doc of the questions I thought were the most informative. Some useful blog posts on the topic:

  • https://jvns.ca/blog/2013/12/30/questions-im-asking-in-interviews/
  • http://juliepagano.com/blog/2015/08/15/job-search-retrospective/#interview-questions
  • http://lizabinante.com/blog/getting-hired-without-getting-burned/

Algorithms

To brush up on my algos, I worked through Problem Solving with Algorithms and Data Structures using Python, making sure to ask questions about/dig deeper into anything I didn’t understand. Most of this I skimmed b/c I was already familiar with it, but I made sure to note the archetypal problems for each data type (“oh, it’s an X problem? You should use data structure Y!”), and paid special attention to trees and graphs cuz they’re my weak spot.

And then it was just looots of practice problems. Some from Cracking the Coding Interview, some from LeetCode, and as much practicing with friends as I could get. The best is to get friends who actually interview developers, because they know the ins and outs of their questions better, but even just having a friend mock-interview you with CtCI or LeetCode questions will do.

Practicing algos Qs on your own just thinking through the solution, or writing the solution on a computer, is good. Writing it out on paper/a whiteboard is better. Practicing with a person (in a mock interview sort of capacity, where you’re timed, they’re giving you hints where necessary but pushing you to explain yourself) is best.

System Design

Like with algos, mostly this was a matter of practice—getting friends to mock interview me with system design questions. I also went over some of the systems I built at work and made sure that I understood the technical choices and trade-offs there, so I could talk intelligently about them in interviews.

A piece of advice I got here (which I didn’t end up taking, and it all turned out okay, but I probably should have done this anyway) was to talk to folks at other companies about their system architecture. My Achilles’ heel in system design interviews has always been that I’m only really familiar with the one or two paradigms I’ve worked in—talking to others about their paradigms would have been super useful, and given me a lot more ideas to draw on.

How long did you prep for?

I took two weeks off of work around Christmas/New Years, and was doing a bit of prep work every day—sometimes several hours of reading or practice interviews, sometimes 15 minutes researching a company or brushing up my resume, but I tried to do something every day. After that, I went back to my day job and I did a bit whenever I could—some problems on LeetCode in the evening, or a practice problem coffee date with a friend over the weekend. After a few weeks of occasional practice interviews, I felt pretty prepped. My process was pretty drawn out cuz I took my time to find companies I wanted to interview for, set up those interviews, etc., so I would keep doing the occasional practice problem during that to keep in shape, but mostly I felt pretty well prepared.

What was your interview schedule like?

Like I said, I took a while at the beginning of this process to get my resume in order, do some research on companies and write up a big list, brush up on algos, etc.

I lined up a couple of phone screens early to get them out of the way (staggering them so that I wasn’t too absent at work), and punted on the on-sites so I could cluster those around the same time.

I highly recommend having your first on-site (or two) be a “warm-up”: not your dream job, but either one you don’t feel too strongly about, or one that is a real long shot and you’re not really banking on. The idea is to have your first interview or two be a little lower stress and lower stakes. Best case scenario, you get an offer and have more leverage or a fab opportunity you didn’t count on; worst case, you get to ease yourself back into the sometimes-grueling world of on-site technical interviews.

How did you decide where to interview?

The way I decide many other things: I made a big ol’ doc where I kept track of recommendations from friends, colleagues, the Internet, etc. I brainstormed products I’d be interested to work on, and places friends had worked or places I’d seen on the internet with cultures that I liked. I looked up the companies of people I enjoy on Twitter and elsewhere on the interwebs. I took suggestions from the amaaazing jobs team at RC.

And then I dug into those companies. Looked for articles on company culture, their approach to tech problems, their thoughts on diversity and whether they had women and PoC on their engineering team/in management, how many blog posts they had by women/PoC, etc. I looked on Glassdoor. If I had friends there, or friends of friends, I reached out to them to ask them if they liked it there. I hit up people in my network (esp. women) for word-on-the-street. And from all that, sortakinda got a picture of these companies and which I was or wasn’t interested in moving forward with.

(I should note that an important part of this process was being clear on what I wanted out of my next company. Having a concrete idea of my own values helped me know what things to look for, which questions to ask, and which things were dealbreakers. It’s worth spending a good chunk of time on this step, as it will inform the rest of your search. In particular, I found Key Values’ Culture Queries a useful place to start.)

Then what?

Well hopefully, the offers start rolling in! Be transparent with everyone in your interview process about where you’re at (esp. with other companies), what your concerns are, and what you need from them. Stating your needs going in sets a good precedent!

When you start thinking about switching jobs, be prepared for lots of last minute insurance submissions and affair-getting-in-order, and remember to take any relevant documents off your computer/email/google drive, any relevant passwords off your password manager, etc. (Pro tip: if you know you’ll be leaving early-ish in the year, max out your Flex Spending Account and you’ll end up with free money! It’s a good time to buy those prescription sunglasses you don’t reaaally need, but heck, they’re free!)

As for me, I’m taking a month off and chilling, and so very excited about it. And thennnnn… I start my new job!

I’m pretty psyched about this next chapter in my life, and I hope that some of the thoughts above are of use to someone. Best of luck, y’all!


  1. Things like: “How do you learn a new language or framework?”; “Tell me about a time that you failed”; etc.

Maia Remez McCormick | Maia McCormick | 2018-03-25 15:17:41

The seventh and final book in the Harry Potter series, Harry Potter and the Deathly Hallows, was published in July 2007 and sold 11 million copies worldwide within 24 hours. This makes it the fastest-selling book in history. I was one of those dedicated fans who stood in line at midnight to get my hands on the book. I read the entire book in one sitting immediately after getting home. It was that good.

My first Python project is text analysis of the Harry Potter series. I’m familiar with the data, which in the scientific world is called domain knowledge. Hopefully, this improves the accuracy of the analysis.

Here is my code.

Goals for this project:

  1. Reinforce the Python fundamentals I’ve learned in the past few weeks by writing simple, easy to read code
  2. Use as few libraries as possible (to learn how to do things the hard way)
  3. Incorporate Amazon storage, one external API to pull data from and a visualization library
  4. Discover some interesting insights that maybe no one else in the world has realized about the Harry Potter series

I purposely did not use the Python Natural Language Toolkit (NLTK) because I wanted to learn how to write Python code from scratch and not completely lean on what has already been written by others. I’ll probably revisit this project and run more sophisticated analysis using third-party libraries in the future.

Here are the titles of each Harry Potter book, the year it was released in the US, its size, and what I refer to it as:

  • Harry Potter and the Sorcerer’s Stone — 1998/0.4 MB (HP 1)
  • Harry Potter and the Chamber of Secrets — 1999/0.5 MB (HP 2)
  • Harry Potter and the Prisoner of Azkaban — 1999/0.7 MB (HP 3)
  • Harry Potter and the Goblet of Fire — 2000/1.2 MB (HP 4)
  • Harry Potter and the Order of the Phoenix — 2003/1.6 MB (HP 5)
  • Harry Potter and the Half-Blood Prince — 2005/1 MB (HP 6)
  • Harry Potter and the Deathly Hallows — 2007/1.2 MB (HP 7)

Get Data Into Notebook

First, I make the 7 HP files accessible from a Databricks Notebook, which is my coding environment. So, I copy the 7 files to Amazon S3 storage and use a Spark cluster to pull the files down from S3 into my cluster’s local file system. From there I can write normal python i/o code to read the files from the local disk.

Bag of Words

For this analysis I use a ‘bag of words’ approach where each book’s text is represented as a bag with words from that particular book. For example, if there are 75,000 words total in HP 1, then its bag of words contains 75,000 comma separated words. The implication here is that each word is analyzed individually, not taking into account word order or grammar.

Bag of words analysis is an effective way to filter out spam emails. For example, emails that contain the words “Viagra”, “cash”, and “free” no matter the order they appear in, are more likely to be spam than emails which do not contain these words.

To transform the data into a bag of words, my code:

  • removes end-of-line (\n) characters from the text
  • splits the text by whitespace to break into words
  • lowercases all text
  • removes all punctuation
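
A rough sketch of that cleaning step, boiled down to a few lines (the hp1.txt file name is a placeholder, not the actual path in my notebook):

import string

def bag_of_words(path):
    # Read one book and return a list of cleaned, lowercased words.
    with open(path, encoding='utf-8') as f:
        text = f.read()
    text = text.replace('\n', ' ').lower()
    strip_punct = str.maketrans('', '', string.punctuation)
    words = [w.translate(strip_punct) for w in text.split()]
    return [w for w in words if w]      # drop tokens that were pure punctuation

hp1_words = bag_of_words('hp1.txt')     # placeholder file name
print(len(hp1_words))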

This part took an absurdly long time as I ran into numerous formatting issues and spent many hours learning about unicode characters. This is normal for data analysts: sometimes cleaning data and getting it into a usable format (called data wrangling or munging) takes just as much time as the analysis itself.

It is possible to derive many new data sets from the bag of words. Here is a screenshot of top word frequencies for Harry Potter and the Sorcerer’s Stone:

As you can see, most of the top words such as the, of and to are fluff words that don’t add much context. These are called stop words. It is common in Natural Language Processing (NLP) to remove stop words. Here’s the list of English stop words I remove for part of the analysis.
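
Counting frequencies takes little more than a dictionary. Here is a rough sketch using collections.Counter, with an abbreviated stand-in for that stop-word list; it reuses the hp1_words list from the sketch above:

from collections import Counter

STOP_WORDS = {'the', 'of', 'and', 'to', 'a', 'he', 'was', 'it', 'in', 'his'}   # abbreviated stand-in

def top_words(words, n=10, drop_stop_words=False):
    # Return the n most frequent words, optionally filtering stop words first.
    if drop_stop_words:
        words = [w for w in words if w not in STOP_WORDS]
    return Counter(words).most_common(n)

print(top_words(hp1_words))                        # dominated by fluff words
print(top_words(hp1_words, drop_stop_words=True))  # a more informative list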

Here’s the revised top words frequencies without stop words, which gives a slightly better understanding of which words are actually important:

Some NLP libraries use stemming which transforms each word to its ‘stem’. For example, running, runs, and runner would be changed to run as they are the same base word. This reduces the number of words in the bag of words, and may be more accurate for analysis. I’ll probably implement stemming when I revisit this code.
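
For reference, the stemming step itself is tiny once you lean on a third-party library; a sketch using NLTK's Porter stemmer (the route I'm deferring for now) might look like:

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
stemmed = [stemmer.stem(w) for w in ['running', 'runs', 'wizards', 'apparated']]
print(stemmed)   # inflected forms collapse toward their common stems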

Word Counts

Below are total and unique word counts for each book.

The number of total words increases from HP 1 to HP 5 (HP 5 being the largest book), drops at HP 6 and increases slightly for HP 7.

The number of unique words also increases with the series. In HP 1 there are 5687 unique words compared to 12624 unique words in HP 5, so the writing is more sophisticated or J.K. Rowling is introducing new words (could be character names, new locations, spells, etc.) to readers.
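
These counts fall straight out of the bag of words; a minimal sketch, reusing hp1_words from earlier:

def word_counts(words):
    # Total and unique word counts for one book's bag of words.
    return {'total': len(words), 'unique': len(set(words))}

print(word_counts(hp1_words))   # 'unique' should land near 5687 for HP 1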

Overall, the later books in the series have significantly more words than the first couple books. Here are some predictions why:

  1. The first book may be the shortest because J.K. Rowling was not a proven author when it was published. Perhaps it would have been harder to get a publishing company to read her manuscript if it were longer.
  2. The Harry Potter series is intended for a young adult audience and the average audience age grew with each book. For reference, I read the first book in fourth grade, so I was eight years old. I read the seventh book at age sixteen. At age eight, I wasn’t as likely to read a large book with over a quarter million words as I was at age twelve, when Harry Potter and the Order of the Phoenix was published.
  3. As the series unfolds, the plot thickens. Maybe all those extra words are necessary for the most enchanting story.
  4. After the first couple books, there was a loyal fan base and demand for the next book in the series. The number of fans only increased as the series continued. Once there was a real demand for more books, maybe J.K. Rowling had more control and flexibility in her work. By the way, J.K. Rowling is the first author to become a billionaire by writing books!

Punctuation Analysis

Readers often overlook punctuation in text, but I thought it might be interesting to look at the punctuation in my data. How many period marks are in each book? How about exclamation marks? Question marks? Does sentence length increase as the series progresses?

This chart’s trend looks similar to the word count chart (HP 5 has the most punctuation and the most words).

There are significantly more period marks than question marks or exclamation marks. I assume this is similar to a lot of literature. For my analysis, I ignore period marks after Mr. and Mrs. as well as ellipses, to avoid inflating the number of sentences ending in period marks.
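
A hedged sketch of that counting step, operating on the raw text (before punctuation is stripped) and neutralising the Mr./Mrs./ellipsis cases first:

def punctuation_counts(text):
    # Count sentence-ending punctuation, skipping 'Mr.', 'Mrs.' and ellipses.
    cleaned = text.replace('Mr.', 'Mr').replace('Mrs.', 'Mrs').replace('...', '')
    return {'periods': cleaned.count('.'),
            'questions': cleaned.count('?'),
            'exclamations': cleaned.count('!')}

with open('hp1.txt', encoding='utf-8') as f:   # placeholder file name
    print(punctuation_counts(f.read()))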

In English, there are four types of sentences:

  • Declarative sentences state facts and end with a period mark.
  • Imperative sentences give commands and typically end with a period mark, but can also end with exclamation marks. For my code, I’ll assume imperative sentences only end with period marks.
  • Interrogative sentences ask questions and end in question marks.
  • Exclamatory sentences express excitement and end in exclamation marks.

J.K. Rowling’s sentence length stays fairly constant throughout the series. However, as you can see in the word counts and punctuation analysis charts above, there are more sentences overall in HP 5.

Sentiment Analysis

Sentiment Analysis (or opinion mining) uses NLP to determine if text is positive, negative, or neutral. This is used for binary decisions such as good or bad and like or dislike. Example use cases are Yelp analyzing whether a restaurant has good reviews or bad reviews or a marketing department mining tweets to understand their consumers’ view on a new product launch.

The sentiment analysis in my code is oversimplified, which creates some error. Consider this sentence.

The movie was neither funny, nor super witty.

A human reads this and can understand it is a negative review. In my ‘bag-of-words’ approach, this sentence is rated positive because the words funny, super, and witty are positive. There are more sophisticated models which take these instances into account and can understand this should be marked negative, but because I didn’t want to use the NLTK library, I decided this is a fair tradeoff for my own learning.

One such example of this oversimplification in my code is the word just, which is the most frequently occurring positive word in each HP book. There are multiple definitions of just and some are positive: (1) based on or behaving according to what is morally right and fair, and (2) exactly. However, there are alternate definitions of just which are not positive, but my code has no way to understand this and simply counts every instance of just as positive.

Another possible issue is the lack of weighted words as more positive or more negative. So the words terrible and odd are both equally negative, even though terrible has a much stronger negative connotation.

Here’s the list of positive words and negative words I used. Since these are general positive and negative words for the English language, the code has no way to know the word mudblood is negative and the word patronus is positive.

Notice the Y-axis above for number of words. HP 5 and HP 7 seem to be darker books than HP 1 or HP 2.

Looking at overall words, there are very few positive or negative words present. On average, only about 2.9% of words are positive, 3.3% are negative and the rest are neutral. This is probably so low because there aren’t any Harry Potter specific positive and negative words in my positive and negative word lists. If only there was a way to programmatically determine if a word is positive or negative based on the surrounding text…. hmm, sounds like a fun project for the future.
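
For completeness, here is roughly how those percentages are produced; the two word sets below are abbreviated stand-ins for the full lists linked above, and the bag of words is the one built earlier:

POSITIVE = {'good', 'great', 'happy', 'love', 'funny', 'super', 'witty', 'just'}   # stand-in list
NEGATIVE = {'bad', 'terrible', 'odd', 'angry', 'dark', 'afraid'}                   # stand-in list

def sentiment_percentages(words):
    # Share of positive and negative words in a bag of words (everything else is neutral).
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return {'positive': 100.0 * pos / len(words),
            'negative': 100.0 * neg / len(words)}

print(sentiment_percentages(hp1_words))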

N-grams

An N-gram is a contiguous sequence of N words in a given text. N-grams of size N=1 are called unigrams, N=2 bigrams, N=3 trigrams, and anything larger is referred to by the value of N: four-grams, five-grams, etc.

Here is the first sentence in Harry Potter and the Prisoner of Azkaban.

Harry Potter was a highly unusual boy in many ways.

If N= 2, here are the bigrams:

If N= 3, here are the trigrams:

The number of N-grams for any given sentence can be calculated as W - N + 1, where W is the number of words; the ten-word sentence above therefore has 9 bigrams and 8 trigrams.
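
Generating the N-grams themselves is a short list comprehension; a minimal sketch:

def ngrams(words, n):
    # Return all contiguous n-word sequences from a list of words.
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

sentence = 'harry potter was a highly unusual boy in many ways'.split()
print(ngrams(sentence, 2))   # 9 bigrams
print(ngrams(sentence, 3))   # 8 trigrams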

Here are some N-grams charts for the series.

Best Friend Trigrams

I don’t know about you, but I was always curious to see if Hermione would date Harry or Ron. Interestingly, by applying text analysis on the books, I could have predicted that Ron and Hermione would date since their names appear together so often.

Magical Locations N-Grams

This chart shows that Hogwarts is mentioned more than other magical locations. It’s interesting to see the Ministry of Magic become important in the middle of the series. I was surprised to see its frequency drop for HP 6 and HP 7, but I think that’s because it starts being referred to as just “the Ministry” and not by its full name.

Hogwarts Courses N-Grams

This chart compares courses taught at Hogwarts School of Witchcraft and Wizardry. Defense Against the Dark Arts and Potions appear to be the most important. The five-gram Defense Against the Dark Arts peaks in frequency during HP 5, probably due to the Harry versus Umbridge battle in that book. In HP 6, the Potions course is the most important, as Harry excels after finding the Half-Blood Prince's textbook. All the courses drop close to zero for HP 7, since Harry does not attend Hogwarts that year.

Character Relationship Analysis

One of my goals for this project was to use an external API since it’s probably something I should be comfortable with for a data analyst job. I decided to use the Wikidata API to programmatically populate a list of all the characters within the Harry Potter Universe using a SPARQL query.

One important thing to note is that this may not be completely accurate. I’m assuming Wikidata has every character in the series and all the names are spelled correctly, which is not guaranteed.
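
The query itself can be sent to the public Wikidata SPARQL endpoint with a plain HTTP request. The sketch below is illustrative rather than the exact query I ran; in particular, the entity and property IDs (Q8337 for the Harry Potter series, P1441 for "present in work") are assumptions that may need adjusting:

import requests

SPARQL_ENDPOINT = 'https://query.wikidata.org/sparql'

# Illustrative query; Q8337 / P1441 are assumed IDs, not necessarily the ones used.
QUERY = '''
SELECT DISTINCT ?characterLabel WHERE {
  ?character wdt:P1441 wd:Q8337 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
'''

response = requests.get(SPARQL_ENDPOINT, params={'query': QUERY, 'format': 'json'})
response.raise_for_status()
characters = [row['characterLabel']['value']
              for row in response.json()['results']['bindings']]
print(len(characters), characters[:10])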

Character relationship analysis was pretty complicated to wrap my head around. I went through a couple different approaches and got stuck before finally discovering the solution I implemented. Here’s a couple of the initial ideas I played around with:

  • Approach 1: Put each instance of every character name into a list in the order it appears in the text and run some average distance calculation to determine the number of people between two characters. However, this doesn’t take into account how many other non-character-name words sit between the character names, so if that number is large, the characters aren’t really close even though the code assumes they are.
  • Approach 2: Break my text into a bag-of-paragraphs and determine which characters frequently appear in the same paragraphs together. When I wrote and tested this code, it didn’t appear very accurate. I think this is because the paragraph lengths vary drastically and sometimes a paragraph can be just a single line response in a conversation.

I had initially assumed a relationship between two characters is symmetric. After thinking about this a lot and hacking on the code I wrote for the approaches above, I realized relationship strength is directional. So Harry Potter’s relationship to Sirius Black may be different from Sirius Black’s relationship to Harry Potter.

Eventually, I decided the best approach is to first pick a viewpoint character and target character (character whose relationship to the viewpoint character we are analyzing). Let’s pick Harry Potter as the viewpoint character and Sirius Black as the target character. Then, I capture every instance where “Sirius” appears within 40 words before and 40 words after every instance of “Harry” and compare this number to the total number of times “Harry” appears in the text. This assumes the characters have a relationship (whether it’s positive or negative is another question), because they are either physically close at that time or one is mentioning the other in conversation.
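
In code, the scoring function is a windowed lookup over the bag of words; a minimal sketch (assuming hp3_words was built with the earlier cleaning sketch, so all names are lowercased):

def relationship_score(words, viewpoint, target, window=40):
    # Of all occurrences of `viewpoint`, what share have `target` within +/- `window` words?
    hits = total = 0
    for i, word in enumerate(words):
        if word == viewpoint:
            total += 1
            neighbourhood = words[max(0, i - window): i + window + 1]
            if target in neighbourhood:
                hits += 1
    return 100.0 * hits / total if total else 0.0

print(relationship_score(hp3_words, 'harry', 'sirius'))   # Harry's score to Sirius
print(relationship_score(hp3_words, 'sirius', 'harry'))   # Sirius' score to Harry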

Harry and Sirius’s relationships for the 7 books

The output shows there was no relationship at all between Harry and Sirius in the first two books. Since Sirius was only introduced as a character in the third book, this makes sense. The 8.88% from Harry to Sirius in HP 3 can be read as “when the word Harry appears, if you look 40 words to the left and 40 words to the right, 8.88% of the time Sirius also appears”. Similarly, Sirius’ relationship to Harry in HP 3 can be described as “69.17% of the time Sirius appears, Harry appears within 40 words”. I’m making the assumption that if names appear within 40 words of each other, they are together or talking about each other (hence, strengthening their relationship score).

Harry’s relationship to Sirius drops to 7.37% in HP 4. This is probably because Sirius spends most of HP 4 in hiding and does not see Harry (or anyone) often. Their communication is mainly via an occasional owl post. In HP 5 their relationship grows stronger, likely due to the time spent together at the Order of the Phoenix headquarters at the beginning of the book. Sadly, after Sirius’ tragic death at the end of HP 5, Harry’s relationship score to Sirius dwindles to around 2% for the rest of the series. It doesn’t drop to 0% because Harry still speaks about Sirius to others, spends time in Sirius’ home and mourns Sirius’ death.

In each relationship I analyze, the other character’s relationship score to Harry is much stronger than Harry’s relationship score to them. Since Harry is the main character and the majority of the story takes place from Harry’s point of view, this makes sense. Harry’s name appears significantly more than any other name in the series. So even after Sirius’ death, when Sirius’ name appears in the text, it is around Harry’s at least 80% of the time.

Below is a chart which shows Harry’s overall relationship score to and from other major characters. The size of each circle corresponds to the number of times that name appears in the 7 books, and the thickness of each line represents the relationship score.

It appears that Harry is closest to Hermione and Ron, which is accurate.

There are a few edge cases my code doesn’t account for. For instance, say Harry’s name appears at the very end of a chapter and the next chapter is a completely different place or time, but Sirius’ name appears within the first 40 words of that next chapter. My analysis still assumes they are either together or talking about each other, even though that may not be the case. However, I think the error rate here is low enough to be acceptable.

Also, my code searches for only one name per character, so Sirius Black is only referenced by Sirius in the code, even though in the text he may sometimes be mentioned by only last name. In the future, I’ll update my code to allow for nicknames and multiple names for characters. Right now, I picked whichever name I think the character is referenced by most in the text (ex: Dumbledore for Albus Dumbledore).

Here are some of the relationship scores:

Haven’t gotten enough Harry Potter knowledge yet? Then, follow my blog as I explore the Harry Potter Universe through different projects!


Harry Potter Text Analysis was originally published in Becoming a Data Analyst on Medium, where people are continuing the conversation by highlighting and responding to this story.

Zareen Farooqui | Becoming a Data Analyst - Medium | 2018-03-24 15:03:07

Photo taken from the top of a hill, overlooking light green ocean beaches

View from the Barra-Galheta beach trail, in Florianopolis, Brazil

My Outreachy internship with Debian is over. I'm still going to write an article about it, to let everyone know what I worked on towards the ending, but I simply didn't have the time yet to sit down and compile all the information.

As you might or might not have noticed, right after my last Outreachy activity, I sort of took a week off in the beach. \o/

Renata's picture, a white woman sitting on the grass, overlooking the beach below. She pets a brown stray that sits next to her

Mila, a cute stray dog that accompanied us during a whole trail

For the past weeks, I've also been involved in the organization of three events (one of them was a Debian Women meeting in Curitiba that took place two Saturdays ago, and another one is Django Girls Porto Alegre, which starts tonight). Because of this last one, I was reviewing their Brazilian Portuguese tutorial and adding some small fixes to the language. After all, we are talking to women who read the tutorial during the workshop, so why should all the mentions of programmers and hackers and such use the male form in Portuguese? Women program, too!

When I was going to commit my fixes, though, I got an error:

remote: error: GH006: Protected branch update failed for refs/heads/master.
To https://github.com/DjangoGirls/tutorial
! [remote rejected]   master -> master (protected branch hook declined)

Oops?

Yup, as it so happens more often than not, I forgot to fork the repository before starting to change the files! I just did 'git clone' straight to Django Girls' tutorial repository. But, since I had already done all the steps towards the commit, what could I do to avoid losing the changes? Could I just push this commit to another repository of my own and try and open a Pull Request to DjangoGirls/tutorial?

Of course I had to go and search for that. Isn't that what all programmers do? Go and find someone else who already had the same problem they have to try and find a solution?

Quick guide to the solution I've found:

  • Fork the original repository to my collection of repos (on Github, just clicking 'Fork' will do).

  • Get the branch and the id of the commit that had been created. For instance, in this case:

[master 4d314550] Small fixes for pt-br version

The branch is: master

The id is: 4d314550

  • Use the URL for the new repository (your fork), the branch and the commit id for a new git push command, like this:
git push URL_FOR_THE_NEW_REPO commit_id:branch

Example with my repo:

git push https://github.com/rsip22/tutorial 4d314550:master

And this was yet another article for future reference.

Renata D'Avila | Renata's blog | 2018-03-23 03:49:00

Simple apps for complex emotions

Recently I made a couple of apps with my friend Emily Theis based on podcasts by WNYC's Death, Sex and Money. The first one is breakupsurvival.guide, which is based on listener-submitted breakup survival suggestions. Initially all of these suggestions were put together by Death, Sex and Money in a Google Sheet, but Emily noticed there were always other people looking at this Google Sheet along with her.

Are you feeling the same things I am, Anonymous Camel?

These anonymous comrades made Emily want to create a mobile-friendly website version of the Google Sheet. She created the frontend of the site, and I hooked up a copy of the suggestions Google Sheet as a very simple database/Content Management System.

I was interested in this project for a couple of reasons:

  1. I wanted to challenge myself to keep the code as simple as possible, especially after working with more elaborate build systems and frontend frameworks for a while.
  2. I had heard about people using Google Sheets as a CMS, but I hadn't explored that technique and was curious about what tools were available for it.
  3. The subject matter was something that a lot of people care deeply about.
  4. I could help my friend improve her coding skills.

The library I ended up using for this project is called Tabletop, and setting it up was really simple. I didn't even really need to do much beyond the example code:

function getCardData() {
  Tabletop.init({
    key: 'https://docs.google.com/spreadsheets/d/1ZqCUv_Ps0lHS0_I8Onk_xcdP9ThUS2ALtmxre5o7h5Q/pub?output=csv',
    simpleSheet: true,
    callback: function(data, tabletop) {
      tabletopData = data;
      randomizeData(tabletopData);

      // drop the 'beating-hearts-baby' class from the body once the data has arrived
      if ($('.beating-hearts-baby').length) {
        $('body').removeClass('beating-hearts-baby');
      }
    }
  });
}

And then I used the same column names as my HTML element IDs and injected the column content with the following code whenever the "Next Suggestion" button was clicked:

var suggestionData = data[dbRow];
var elements = ['type', 'name', 'suggestion', 'comment'];

// Grab the content and put 'er in
elements.forEach( function (el, index, elements) {
  elements[index] = document.getElementById(el);
  elements[index].textContent = suggestionData[el];
});

The one thing that was tricky was that at one point we accidentally had a second blank sheet as part of the Google Sheet we were pulling from, which caused an error and prevented information from being sent back to the app. This was because we told Tabletop to expect just one sheet (simpleSheet: true). Once I enabled debug mode, however, the error message and the cause of the issue were super clear. I just didn't realize this handy mode existed for a while, and so I wasn't seeing the error message.

The code for this app could be improved in a few ways (namely by caching the Sheet data, caching the HTML element reference so you don't have to get the elements by ID on each click, and adding a loading animation), but for a quick and fun project it worked well enough. We made an app-y website that a ton of people visited, that got some German press and that Death, Sex and Money were excited about! Plus another bonus: turning on Google translate for that German article introduced me to such gems as "grief hole."


In fact, DSM ended up liking the project enough to ask us to do another one for their Opportunity Costs series about the role class identity plays in our lives. In a similar vein as the Breakup Survival Guide, it shows randomized listeners' responses to the question, what makes you proud or ashamed of your class? You can find this app at http://dsmclass.community/


For this second round, I used localStorage in the browser to make caching improvements for both the Sheet data and the HTML elements:

cacheData: function( data ) {
    var scope = this;
    var dataToCache = JSON.stringify( data );
    var today = new Date();
    var cacheName = 'DSMclass' + today.getMonth() + today.getDate();
    var cachedData = localStorage.getItem( cacheName );

    // Only clear and create new cache if today's data hasn't been cached yet
    if ( !cachedData ) {
      localStorage.clear();
      localStorage.setItem( cacheName,  dataToCache);
    }

    return;
}

And I also created a service worker to make this website a little more of a Progressive Web App. This allows the HTML and CSS of the page to be cached, so the site stays available even if the user doesn't have internet access. One downside is that it makes site development a little hard: because it caches styling, it would be a roadblock to Emily's frontend work, since it wouldn't show any updates she made. I have the service worker's init line commented out at the moment, but now that this site is launched and stable, we'll uncomment it.

I'm pleased to say that we got a lot of positive feedback and visitors to this Class app as well. People tend to come and stay on these apps, even for up to a minute at a time, which seems like an eternity on the internet. It makes me really excited that a simple premise and website can be so powerful for users. It feels a bit like an art project in the style of Post Secret, which is also a vehicle for people to anonymously share their innermost thoughts on taboo subjects.


Aside from the community response, I found these projects had a few other upsides as well. The tech stack is very simple thanks to the Tabletop library for Google Sheets, which made it easy to get started quickly and simply. I could see this being very helpful for anyone that wanted to get up a quick app prototype. Its simplicity also seemed like a bonus for Emily in terms of learning about the backend and for launching the site. I highly recommend using Tabletop and Google Sheets if you're a beginner looking to create an app without needing to deal with a complicated tool set.

Next up I'm working with a friend on an app to both randomly suggest Sex and the City episodes to watch and to track which episodes have been suggested. The random suggestion part is very similar to these previous two apps, but for sending information back to the Google Sheet (i.e. recording which episodes have been suggested), I'll need to go beyond Tabletop, since it currently provides read-only data and doesn't allow sending information back to the sheet.

Katie Broida | Ruminations on Coding and Crafting | 2018-03-18 20:01:57

Configuring Travis CI to enable devDependencies incompatible with Node versions your application supports

“Wishes” (Glass Feathers with 24k Gold Leaf) by Kiara Pelissier

As part of the current Outreachy cohort, I am contributing to the Node.js client library for Jaeger, an open source distributed tracing system.

Recently, we ran into the problem of wanting to update our tooling in a way that incorporated dependencies that were not compatible with all the Node versions we support. Note: the Node.js client library for Jaeger is committed to supporting Node v0.10 and up.

Yes, Node v0.10 and v0.12 have end-of-life status. However, surveys suggest that 30 to 50 percent of Node.js commercial projects are using Node v0.10 or v0.12 in production. Jaeger, as a distributed tracing system, provides the ability to monitor and troubleshoot microservices-based applications. Jaeger’s Node.js client supports versions 0.10 and up so that projects using those Node versions in production can have the benefit of observability into their microservices-based distributed systems.

However, Jaeger’s current commitment to supporting Node v0.10 had a limiting effect on the dependencies that we were using in the Node.js client library.¹

To recap, the most common types of dependencies in every project are dependencies and devDependencies. Your package.json specifies your project’s dependencies:

{
  "name": "my-project",
  "version": "1.0.1",
  "dependencies": {
    "package-a": "^1.1.2"
  },
  "devDependencies": {
    "package-b": "^1.5.0"
  }
}

The packages specified in dependencies are the dependencies needed when running your code.

The packages in devDependencies are your development dependencies. These are the dependencies that you need at some point during your development workflow but not when running your code (e.g. a test framework or a transpiler).

Previously, we had Travis CI configured to build and test each version of Node that Jaeger’s Node.js client supports.² In practice, this meant that our dependencies and devDependencies had to be compatible with every version of Node the Jaeger Node.js client supports, including Node v0.10.

Then we decided to upgrade from babel-preset-es2015 to babel-preset-env.

Upgrading to babel-preset-env

For transpiling ES2015+ to ES5, Babel recommends using babel-preset-env. Without any configuration options, babel-preset-env behaves exactly the same as babel-preset-latest (or babel-preset-es2015, babel-preset-es2016, and babel-preset-es2017 together). You can also configure it to target certain browsers or runtime environments, and this will make your bundles smaller.

There were enough benefits that we decided to upgrade from babel-preset-es2015 to babel-preset-env. However, babel-preset-env targets node 4.

Our .travis.yml was configured so that Travis would build and test for each Node version the Jaeger Node.js client supported. The start of our .travis.yml looked similar to this:

language: node_js
node_js:
- 'node'
- '8'
- '6'
- '4'
- '0.12'
- '0.10'

Then we had a script that both built and tested each version of Node listed under node_js. The result was that upgrading our devDependencies to include babel-preset-env broke the Travis build for Node v0.10 and v0.12.

We needed to reconfigure our .travis.yml file so that we would be able to upgrade to newer versions of our tooling, and then, once the transpiling was done, run our tests in Travis with different versions of Node. In order to do so, we utilised the Travis build matrix:

language: node_js
node_js:
- '6'
matrix:
  include:
  - env: TEST_NODE_VERSION=0.10
  - env: TEST_NODE_VERSION=0.12
  - env: TEST_NODE_VERSION=4
  - env: TEST_NODE_VERSION=6
  - env: TEST_NODE_VERSION=node

If there is only one version of Node listed under node_js, Travis will interpret that Node version as its default version. We can then use the Travis default Node version as our build version for every set of tests, thereby enabling the use of devDependencies irrespective of whether they are compatible with every Node version we support. We also customise our tests to run against every version of Node included as an env within the Travis matrix. To do so, first we install the Node version we will test against and then switch to the Travis default version:

before_install:
# Install the Node version to test against
- nvm install $TEST_NODE_VERSION
# Switch back to build-time Node version
- nvm use $TRAVIS_NODE_VERSION

Then we build with the Travis default version of Node, and afterwards switch to the version of Node we are testing within our Travis matrix:

before_script:
- make build-node
- nvm use $TEST_NODE_VERSION
- node --version
- rm -rf ./node_modules package-lock.json
- npm install

Notice that we print out to our Travis CI logs the Node version that we’ll be testing. We then remove all the node_modules that had been installed during the initial build and, importantly, the package-lock.json that had been installed during that build. We then initiate another npm install while using our TEST_NODE_VERSION.

This worked great while updating from babel-preset-es2015 to babel-preset-env! However, soon we thought to add Prettier to the project.

Adding Prettier, Husky, and Lint-staged

Prettier: An Opinionated Code Formatter, Excellent for Open Source

I’ve written about Prettier and why we decided to add Prettier to Jaeger’s Node.js client here. To facilitate smoother collaboration, we decided to add a pre-commit hook using husky and lint-staged, which would ensure that all PRs would be formatted by Prettier using our chosen configuration.

Unfortunately, husky could not be installed on Node v0.10/v0.12. The Travis CI build status page is great for seeing at a glance if the build passes or fails and, in our case, for which versions of Node. When failures occur, reading the logs in Travis CI is fantastic to find out more information. In our scenario, we could see that our final npm install in our before_script would error out when trying to install husky on Node v0.10/v0.12. My first attempt to solve this problem was to place husky as an optionalDependency within our package.json:

"devDependencies": {
...
},
"optionalDependencies": {
"husky": "^0.14.3"
},
"scripts": {
...

Optional dependencies are just that: optional. If they fail to install, npm will continue with the installation process and, once completed, will consider the installation successful.

This is useful for dependencies that won’t necessarily work on every machine, or every version of Node your application supports. However, you do need to have a fallback plan in case the optionalDependencies are not installed. For Jaeger’s Node.js client library, this is not a problem as husky is a devDependency not necessary when running the application in production and our recommended Node version for development is Node 6.

To ensure we had no problem with our builds in Travis CI, for our final npm install in our before_script, we used a --no-optional flag, like so:

- npm install --no-optional

Our Travis CI tests all passed! Excellent.

Except, optionalDependencies are by default installed as dependencies not devDependencies. In other words, optionalDependencies are also production dependencies. Unfortunately, this meant that husky was being installed for consumers of the Node.js client library. What we needed was an optional devDependency.

More broadly speaking, for any package downloaded from npm, all production dependencies are installed, including optionalDependencies. Unsurprisingly, we weren’t the only ones wishing for optional devDependencies. The problem and possible solutions are discussed here.

Happily, Joe Farro came up with an elegant, simple solution that works for Jaeger’s Node.js client. Husky and lint-staged are now both listed as devDependencies in the package.json; however, before the final npm install in the before_script, we remove husky and lint-staged from the package.json:

- npm uninstall -D husky lint-staged

For clarity, I’ll explain the components of the above command. -D is an alias for --save-dev, which, when used with npm install, ensures that the listed packages are both installed in the local node_modules folder and listed as devDependencies in your package.json. As we are using it with npm uninstall, it does the opposite, removing the packages from the devDependencies in your package.json. When we then run the npm install command in the next step, those packages are no longer listed in the package.json and so there is no attempt to install them. The before_script in the .travis.yml file now looks similar to:

before_script:
- make build-node
- nvm use $TEST_NODE_VERSION
- node --version
- rm -rf ./node_modules package-lock.json
- npm uninstall -D husky lint-staged
- npm install

In the end, I’m really happy with the solutions we’ve come up with for enabling us to update the devDependencies for Jaeger’s Node.js client library, and hope that this explanation is useful for others!

Note: In an attempt to make this explanation of how we reconfigured our .travis.yml file more helpful to others, I have simplified the descriptions of Jaeger’s Node.js client library’s .travis.yml file. The Node.js client library for Jaeger is open source and the current .travis.yml file can be viewed here.

[1] The Node.js client’s current support for Node v0.10 and up also has an effect on which package manager we use. The Jaeger Node.js client uses npm, as yarn supports Node 4 and up.

[2] Travis CI is a continuous integration service used to build and test projects hosted on GitHub. For Jaeger’s Node.js client we have Travis CI run our tests on every pull request made to the GitHub repo. The .travis.yml file tells Travis the programming language for your project and how to build it. Any step of the build can be customized.

Travis CI is free for open source projects on GitHub. ❤️


Configuring Travis CI to enable devDependencies incompatible with Node versions your application… was originally published in JaegerTracing on Medium, where people are continuing the conversation by highlighting and responding to this story.

Kara de la Marck | Stories by Kara de la Marck on Medium | 2018-03-18 14:30:25

My struggle with Reader’s Block and what it taught me.

When I was 6, my aunt gifted me a Panchatantra book (childhood stories). I was delighted. I’ve never looked back since.

When I was 11, I was introduced to the wizarding world of J.K. Rowling. So what if I never made it to Hogwarts! What followed were Twilight, Percy Jackson, The Long Walk to Freedom and a long list of books.

When I was 14, I finished off a chapter on Anne Frank from the textbook, even before our English class resumed. I got a copy of the book from the library and finished it. I secretly wished that I could be as natural a writer as her.

On joining high school, I rushed to get a copy of Breaking Dawn from the library on day one. Oh Thank God for the library! Once, I stayed overnight to finish Angels and Demons too. Ah the memories!

When I was 18, Paulo Coelho became my favorite writer after reading Brida. Surprisingly though, I could never finish The Witch of Portobello. “Maybe it’s the coursework”, I consoled myself.

When I was 19, I didn’t read beyond a few pages of Life of Pi, only because my friend shared a negative review. I had got a perfect excuse to not read it.

Post 19, I experimented with semi-autobiographies and post colonial literature (as a part of my coursework). The God of Small Things, The Inheritance of Loss, Wise and Otherwise and what not. Despite this, my reading speed had decreased. I stopped enjoying reading too. I would skip lines and paragraphs, forget past references in the story and sometimes, hardly focus on what I was reading. I would go to the library, issue books and never read them. Reader’s block is a thing 😞. The mad rush to build careers, the rat race and the coursework left me with no time to read.

“Reading”, I proudly answered when someone asked my hobby. Little did they know I was referring to course books, developer and software blogs and tutorials.

Lucky for me, I was soon blessed with a relatively free semester and I made it a point to start reading again. Sudha Murty, Paulo Coelho, Arundhati Roy and many more. Frequent library visits. Since then, nothing much has changed. My hectic day has some time reserved for reading. I had survived Reader’s Block.

When you have a hobby, don’t give up on it. Like all the good things in life, hobbies are hard to find and even harder to keep. Reading has become more fun after the dry phase. Reading will always be a constant.

Reader’s Block is a thing! was originally published in Be Yourself on Medium, where people are continuing the conversation by highlighting and responding to this story.

Gauri P Kholkar | Stories by GeekyTwoShoes on Medium | 2018-03-17 07:13:38

I came across Mercurial when I first started contributing code to Firefox. Mercurial is a version control software like Git, CVS, etc. The Firefox code base is version controlled using Mercurial which means that if you’re interested in contributing code to Firefox, it’s a good idea to get acquainted with Mercurial first.

In this blog post, I talk about a few Mercurial commands with a focus on Firefox development. This should help you get started with Mercurial quickly and save you the time spent in figuring out what to learn on the web.

If you’ve built Firefox on your machine using the bootstrap.py file, you should already have Mercurial installed and configured for Firefox development.

So, here are a few commands to help you get started:

1. hg diff

This command is used to see all the changes you have made to different files in the code base after you’ve saved the changes. It displays the file names along with any uncommitted changes you have made to a file.

2. hg wip

What a ‘hg wip’ output looks like

This command gives you a tree view of your work in progress. It helps keep track of all the changesets you have created while working on different bugs. A changeset in Mercurial is a collection of changes to files in the code base. It is identified using a changeset ID. A changeset can be created by committing your local file changes.

3. hg commit

This command is used to consolidate all your different file changes into a changeset. I suggest using this command with the -m flag if you don’t want to enter into editor mode and get stuck trying to figure out how to exit vi or vim. Here’s how you can do that:

hg commit -m "commit_message_goes_here"

or

hg commit --amend -m "new_commit_message_goes_here"

The '--amend' option is used to update an existing changeset.

4. hg up <changesetID>

This command is used to navigate between changesets when you’re working on many bugs at once. It can be used with the ‘hg wip’ command to switch between different changesets.

5. hg push review

This command is used to submit a changeset for review. You can submit a changeset for review by first navigating to that changeset by using ‘hg up <changesetID>’ and then using ‘hg push review’. Before you can submit your patches for review using ‘hg push review’, you have to configure your machine to use MozReview.

6. hg export

This command is used to generate a patch file. You can save the generated patch file by doing:

hg export > file.txt

However, if you’re using MozReview, it’s not necessary to generate a patch file. In that case, ‘hg export’ can be used to simply view the patch file in your terminal.

7. hg pull central

This command is used to pull the latest changes that have landed on central onto your local machine, between when you last updated your local code base and now.

8. hg rebase

This command is used to take a changeset that has been committed on top of an older version of your local code base and put it on top of the latest version you have pulled. It is usually used after ‘hg pull central’ to place a changeset on top of the latest changes. Here’s how you can use it:

hg rebase -s <changesetID> -d central

The -s flag specifies the source changeset ID and the -d flag specifies the destination changeset ID. Mercurial tags like ‘central’, ‘tip’, etc. can be used in place of changeset IDs.

9. hg add <path>

This command is used to add new files or folders to the repository.

10. hg remove <path>

This command is used to remove files or folders from the repository and also erase them on disk.

11. hg forget <path>

This command is used to remove files or folders from the repository without actually erasing them on disk.

That is it for this blog post. I hope that this blog post has given you a good enough idea about Mercurial to get started with Firefox development. Happy coding!

Prathiksha G | Stories by Prathiksha G Prasad on Medium | 2018-03-14 16:48:08

In this series of blog posts, I hope to shine light on some of the existing neural-network based techniques for reconstructing the three-dimensional model of an object given its two-dimensional image counterpart. Along the way, I’ll review some landmark papers that tackle this problem in novel ways. Before we go into the details, here’s why I think this problem is really worth getting right.

What is 3D reconstruction and why is it important?

When we look at a flat two-dimensional image of an object, we as humans, cannot help but perceive its 3D structure. Although we view this world through 2D projections of 3D objects, from numerous previous examples, we are able to build a very good 3D model of the objects around us. This ability is central to our perception of the world and the manipulation of objects within it. Extending the above idea to machines, the ability to infer the 3D structure from single images has far-reaching applications in the field of robotics and perception in tasks such as robot grasping, object manipulation, etc. The 3D geometry of an object plays a pivotal role in analyzing the object dynamics and could greatly improve robot-object interactions. Beyond the field of robotics, consumer-facing applications such as virtual-reality/augmented-reality for online shopping, gaming, etc could greatly benefit from 3D reconstructions that are accurate yet efficient.

Now that we have established the importance of working on this task, we will go ahead and look into some of the challenges posed by it. First and foremost, 3D reconstruction from single-view images is an ill-posed problem. Consider the image of a chair taken from its back view in Figure 1a. If you were tasked with the job of building a 3D model of it, what would it be? Can you come up with more than one 3D model that is consistent with this particular viewpoint? Looking at Figure 1b, it is evident that multiple models are capable of satisfying the 2D image counterpart. This property means that we can never really have a unique ground truth for any input image, and it calls for building neural network models that can factor a degree of variability into the generated predictions. As will be discussed in the upcoming blog posts, Fan et al. use a random variable input in addition to the input image as a mechanism for including this variability in the network predictions.

The second point of concern is the choice of data format used for representing the 3D model. Unlike 2D images, whose most popular representation format is the pixel space, data representation in 3D offers more choice and necessitates a smart selection according to the intended application. In the following section, I present a non-exhaustive list of possible representation formats.

3D Representation Formats

Voxel

In simple terms, a voxel is just the 3D equivalent of a pixel. A pixel occupies space on a 2D plane, whereas a voxel occupies space in a 3D volume. Although a voxel is the most direct representation format for 3D data, it is riddled with a number of shortcomings, the most important ones being efficiency and scalability. Consider the same car in Figure 1, whose 3D model we want to save in a voxelized format in a grid of size 64³. A voxelized grid representation is a 3D occupancy grid where every voxel indicates the presence or absence of the object. You may notice that all the information related to the shape of the car is present on the surface itself. The voxels inside the surface add no new information to the body shape (unless we are aware of the car’s contents). It doesn’t help that the voxel grid has cubic complexity with respect to the grid length. In other words, this is a dense representation format that unnecessarily increases the data complexity when sparsity is desired. Coming to the second shortcoming, a voxel representation scales poorly: as the resolution of the voxel grid increases, the ratio of surface voxels to inner voxels decreases. This implies that a high resolution is more inefficient than its low-res counterparts despite capturing finer details.
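
To make the scaling argument concrete, here is a tiny numpy sketch that voxelizes a solid sphere into a 64³ occupancy grid and compares surface voxels to occupied voxels (the sphere is just a stand-in for the car):

import numpy as np

N = 64
coords = np.indices((N, N, N))                     # 3 x N x N x N index grid
center, radius = (N - 1) / 2.0, N / 3.0
dist = np.sqrt(((coords - center) ** 2).sum(axis=0))

occupancy = dist <= radius                         # binary occupancy grid
surface = occupancy & (dist >= radius - 1.0)       # roughly one-voxel-thick shell

print('total voxels:   ', occupancy.size)          # 262144 = 64^3
print('occupied voxels:', int(occupancy.sum()))
print('surface voxels: ', int(surface.sum()))
# The shell is a small fraction of the occupied volume, and that fraction
# shrinks further as N grows -- the scalability problem described above.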

Although the voxel grid has a number of drawbacks when it comes to storage of 3D data of everyday rigid objects, in some other areas such as medical image analysis, a voxelized representation is the only one that can capture the information we wish to store. Consider for example a CT scan of a patient’s torso. How might we store this data? Notice that this data has more meaningful information within the surface of the body rather than on it. In such cases, a voxelized format is the way to go! Every voxel in this case usually stores an intensity value (different from a binary occupancy). There is currently a lot of progress going on in the medical imaging field with respect to analyzing such 3D data. Competitions such as Kaggle Lung Cancer Detection Challenge have a number of top-winning solutions that are deep learning based. Neural nets for improving medical diagnostics. Way to go!

 

Octree

This data format comes to the rescue of voxelized representations by making use of a data structure called an octree that hierarchically stores details in the form of coarse to fine 3D blocks (equivalent to a LEGO set).

 

Point cloud

 

Mesh

 

Volumetric Primitives

Priyanka Mandikal | Code and Curry | 2018-03-13 19:38:21

Marcela: I am not certain that teaching a large class would do what I’m wanting to do. The people I most want to help get into UX are also the people least likely to be able to afford to take UX courses.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2018-03-07 16:18:54

I was an Outreachy intern in the past and I know how nerve-wracking the application process can be. Hence, I have decided to aggregate blog posts from past Outreachy interns describing dos and don’ts as well as tips for the application process. These can be helpful if you are applying for Google Summer of Code too. Hope this helps!

Posts from past Outreachy interns with Fedora:

Alisha Aneja

Bhakti Bhikne

Suzanne Hillman

Some other interns: miguel

Posts from past Outreachy interns with other organizations:

Neha Jha, Wikimedia

Mansimar Kaur, Kinto

Bee, Mozilla

Zareen, Wikimedia

Anjana Vakil, Mozilla

Princi, Mozilla

More Outreachy related posts on Medium by past interns


P.S. Some of the things mentioned might have changed due to updates in the rules and application process for the current Outreachy round. Even then, I think these should be useful.

Bhagyashree Padalkar | networks for data | 2018-03-07 00:28:03

This is the text of the crowdfunding campaign I am organizing with five other extraordinary women: Alice, Ana Paula, Anna, Luciana and Miriam.

Women in MiniDebConf. Let's show that, yes, there are many women with potential that are interested in the world of free technologies and who use Debian! Help with the campaign for diversity at MiniDebConf: https://catarse.me/mulheres_na_minidebconf

Here is what Anna has to say about Debian: "Debian is one of the strongest GNU/Linux distributions with the free software philosophy, and that's why it's so impressive. Anyone who comes in contact with Debian necessarily learns more about free software and the FLOSS culture."

The Debian project provides travel grants for participation in conferences - for people who are considered project members.

And how many of these are women? Very few.

There are many women interested in attending MiniDebConf Curitiba, but most of them do not have the means to travel to Curitiba, especially in a big country like Brazil.

It is a fact that women do not have the same opportunities in the IT world as men, but we can change that history. For this, we need your help.

Let's show that, yes, there are many women with potential interested in the world of free technologies and who use Debian. At MiniDebConf, women who could already be contributing to the community will have an opportunity to interact with it, taking part in tutorials, workshops and talks. It is in everyone's best interest that the community get itself ready to include them.

So, our way of helping to increase diversity in MiniDebConf - and perhaps among the people who contribute to Debian as well - is by giving these women the conditions they need to participate in the conference.

The Debian Women community is well developed in other countries, but in Brazil there were still no registered groups.

At last year's MiniDebConf, there was not one single woman speaker.

But it does not have to be this way.

To be able to increase diversity and to change the current situation of exclusion that we currently have in the Brazilian community, we must act on many fronts. We are already working to foster the local community and to engage other women in the use and development of Debian.

That is why we want to also bring in women who are already Debian users, so they can share their experiences, so they can act as mentors to the newbies and so we can integrate all of them into the Debian development community.

There have already been successful campaigns in Brazil to include women in conferences and technology communities, both as participants and as speakers: PyLadies in FISL, PyLadies in Python Brazil 12, PyLadies in Python Brazil 13 and the Gophercon BR Diversity Scholarship.

With your collaboration, this will be another goal achieved - and the Debian and free software communities will become a bit more representative of our own society.

Bitcoin - 15YFYKHr6CfYmBCyf4JM2g8WFkCmNGDGi5

Women in MiniDebConf. Let's show that, yes, there are many women with potential that are interested in the world of free technologies and who use Debian! Help with the campaign for diversity at MiniDebConf: https://catarse.me/mulheres_na_minidebconf

Link to the campaign: https://www.catarse.me/mulheres_na_minidebconf

Renata D'Avila | Renata's blog | 2018-03-05 15:49:00

Recently Pranav Jain and I attended Bob Conference in Berlin, Germany. The conference started with a keynote on a very interesting topic: a language for making movies. Using a non-linear video editor for making movies is time consuming, of course. The speaker talked about the struggle of merging the presentation, video and high-quality sound for conference recordings. Clearly, automation was needed here, and it could be achieved by (1) making a plugin for a non-linear video editor, (2) writing a UI automation tool like an operating-system macro, or (3) using shell scripting. However, shell scripts for this purpose could be time consuming to maintain, no matter how great shell scripts are. The goal was to edit videos using a language alone, without the tooling getting in the way. In other words, a DSL (Domain-Specific Language) was required, along with Syntax Parse. Video (https://lang.video/) is a language for making movies which integrates with the Racket ecosystem. It combines the power of a traditional video editor with the capabilities of a full programming language.

The next session was about Reactive Streaming with Akka Streams. Streaming big data applications is a challenge in itself: processing has to happen in near real time, i.e. there is no time to batch data and process it later. Streaming also has to be done in a fault-tolerant way; we have no time to deal with faults. Talking about streams, there are two types: bounded and unbounded! A bounded stream is batched and processed to give some output, whereas an unbounded stream just keeps on flowing… just like that. Akka Streams makes it easy to model type-safe message processing pipelines. Type-safe means that, at compile time, it checks that data definitions are compatible. Akka Streams has explicit semantics, which is quite important.
The basic building blocks of Akka Streams are Sources (produce elements of a type A), Sinks (take items of type A and consume them) and Flows (consume elements of type A and produce elements of type B). The source sends data via the flow to the sinks. There are also situations where data is not consumed or produced: materialized values are useful when we, for example, want to know whether the stream was successful or not, the result of which could be true/false. Another concept involved was backpressure. When we read from a file, it's fast. If we split that file on \n, it's faster. If we fetch data over HTTP from somewhere, it can be slow due to network connectivity. What backpressure does is let any component say 'woah! slow down, I need more time'. Everything is only as fast as the slowest component in the flow, which means the slowest component in the chain determines the throughput. However, there are situations where we really don't want to, or can't, control the speed of the source. To have explicit control over backpressure we can use buffering: if many requests are coming in and a limit is reached, we can set a buffer after which requests are discarded, or we can push the backpressure upstream when the buffer is full.

Next we saw a fun demo on GRiSP: bare-metal functional programming. GRiSP allows you to run Erlang on bare-metal hardware, without a kernel. A GRiSP board could be an alternative to a Raspberry Pi or an Arduino. The robot was stubborn, but interesting to watch! Since Pranav and I have worked on a real-time communications project, we were inclined towards attending a talk on understanding real-time ecosystems, which was very informative. We learned about HTTP, AJAX polling, AJAX long polling, HTTP/2, Pub/Sub and other relatable concepts, and learned more about protocols and layers in the last talk of the conference, Engineering TCP/IP with logic.

This is just a summary of our experiences and what we were able to grasp at the conference and also share our individual experience with Debian on GSoC and Outreachy.

Thank you Dr. Michael Sperber for the opportunity and the organizers for putting up the conference.

Urvika Gola | Urvika Gola | 2018-03-02 13:24:38