The kernel development cycle has evolved so well over time that it has set an example for the open-source world. Having contributed to the kernel, I genuinely enjoyed learning about the whole development cycle. Terms like mainline kernel, -rc, stable release, and long-term support confused me a lot initially, but with time I came to understand at least the basic workflow.

Keeping in mind the volume of code that sits inside the kernel, it is very difficult for a single person to inspect every part of the project thoroughly. Hats off to Linus and to people like Greg. To make the process manageable (it only looks easy :P), the kernel is broken down into subsystems, each with its own lead developer, generally called the top-level maintainer. These maintainers decide which patches go into the mainline kernel. Alongside them, many file-system and driver maintainers review patches before the subsystem maintainers accept them. These maintainers can in turn send pull requests to the subsystem maintainer when a subsystem is itself divided into sub-subsystems. That is why the whole kernel is built on a chain of trust. Finally, all the patches collected by the subsystem maintainers go to Linus in a pull request. So, to get your patches into the kernel, send them directly to the relevant maintainers; this is where the get_maintainer.pl script comes in handy.

Subsystem maintainers collect patches ahead of time and send them to Linus when the merge window opens. To be more concrete, let’s assume that the development cycle for version 5.1 has ended and 5.1 has been released. As soon as 5.1 is out, a new development cycle for version 5.2 starts and the merge window opens. The merge window is the starting point of the development cycle, during which code that is considered sufficiently stable gets merged into the mainline kernel by Linus. All the new features for the next kernel version go in during this merge window. This is usually a two-week process: the maintainers send pull requests to Linus, and Linus merges them into the mainline kernel. But this is not yet how a new version gets released.

At the end of those two weeks, Linus declares that the merge window has closed, and the process of stabilizing the mainline kernel begins. At this point some features are still untested or unstable, and bug fixes have to be made to prepare the kernel for the next stable release. Closing the merge window means it is time to start releasing the -rc (release candidate) kernels. Over the next 8–9 weeks, the developers work continuously on the -rc versions. Every week a new -rc is released, named 5.2-rc1, 5.2-rc2, and so on. These -rc versions carry fixes and improvements to the mainline kernel that was built up during the merge window. During these 8–9 weeks, only such fixes go into the mainline; no new features are added after the merge window closes. Finally, at the end of the -rc weeks, the kernel is stable and ready for release as version 5.2. So the whole development cycle takes around 10–12 weeks, and we get a new version roughly every three months.

The whole cycle goes like this (sorry for not giving a graphical representation):

5.1 released → merge window for 5.2 opens for 2 weeks → changes staged for 5.2 go into the mainline kernel → window closes and the unstable 5.2 is ready for fixes → week 1 (5.2-rc1) → week 2 (5.2-rc2) → … → finally, 5.2 is released after around 10–12 weeks.

Note:

The merge window is a very busy time for the developers, and they avoid new non-urgent patches during these two weeks due to lack of time. I have encountered one case myself where a developer asked me not to send non-urgent cleanup patches during this period. During the merge window they concentrate only on the material for the upcoming -rc1, and such patches might annoy them. But they are happy to take them as soon as the -rc1 week starts.

Happy Linux coding :)

This year Bazel has given me an invaluable opportunity to work with them as a Google Summer of Code 2017 participant, and I am very excited to share the news. I will be working on a project titled “Code completion for .bzl files” under the guidance of Mr. Laurent Le Brun. The aim is to create a standalone binary that provides code completion for Skylark files (.bzl). The input to the binary is a .bzl file and a location; the output is a list of possible completions. The main use case is to provide completions for editors and IDEs.

The main idea is to adopt the Microsoft Language Server Protocol to integrate code completion/auto-complete. By adopting the protocol we can also add features like go-to-definition, find-all-references, and the like in the future. More info here.
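To give a feel for the protocol, here is a simplified sketch of an LSP completion exchange (the field values are illustrative placeholders; real messages carry additional fields and a length-prefixed transport header):

```python
import json

# a simplified LSP "textDocument/completion" request: the editor asks
# for completions at a given position in a .bzl file
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/completion",
    "params": {
        "textDocument": {"uri": "file:///path/to/defs.bzl"},
        "position": {"line": 12, "character": 8},
    },
}

# a simplified server response: a list of completion items
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": [{"label": "my_rule"}, {"label": "my_macro"}],
}

# messages travel as JSON between the editor and the server
wire = json.dumps(request)
labels = [item["label"] for item in response["result"]]
```

Because the protocol is editor-agnostic, a single completion server written this way can back many editors and IDEs at once.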

I am both excited and humbled to announce that I was accepted into Outreachy Round 14! I will be working with my mentor Randy Barlow (bowlofeggs) to improve Bodhi, the web system that publishes updates for Fedora. We will hash out the plans in greater detail next week, so I’ll post an update after that meeting. As of now, the plan is to make improvements to the Bodhi CLI — more specifically, command completion.

If you’re thinking, “Okay, wait — what’s Outreachy?”, let me give you a brief summary: Outreachy is a wonderful internship opportunity for underrepresented groups in FOSS. Its primary goal is to encourage members of underrepresented groups to become FOSS contributors. An Outreachy participant is paired with a mentor (supplied by the FOSS organization) to work on a project. The projects range from Linux kernel development to designing a coloring book; there’s truly a project for everyone. Outreachy is hosted by the Software Freedom Conservancy, with help from Red Hat and the GNOME Foundation. If you’re interested in learning more–especially if you’re a member of an underrepresented group in FOSS–you can find more information here.

How I came to apply to Outreachy
I’ve been mustering up the courage to apply for Outreachy for over a year. During Rounds 11–13, I ended up with intense feelings of imposter syndrome a day or two before the deadline, and thus never submitted an application before Round 14. This round, I started and powered through the application tasks only a few days before the deadline (no time for imposter syndrome to set in). I reminded myself that it’s only impossible to get in if you don’t apply, and that internships are meant to be learning opportunities. I held my breath through the application process and got in. (I have to keep reminding myself because I’m still in shock, :D.) I will outline my initial contributions (i.e., application tasks) below.

Initial contributions
My first contribution to Fedora was also my first FOSS code contribution: fixing a bug in Bodhi. When an update reaches the required time in testing, Bodhi comments to alert the user that it can be pushed. The user can edit the update at any time, including after Bodhi has commented that it can be pushed. After an update is edited, the karma count and the number of days in testing are reset, and thus the update’s eligibility to be pushed is reset as well. If the karma or days-in-testing threshold is reached again, the user should once more be able to push the update, and Bodhi should comment to notify them. The bug was that Bodhi did not post this new comment if it had already commented that the update could be pushed before the karma (and days-in-testing) reset. This happened because the method met_testing_requirements() decided whether the testing requirements were met based on Bodhi’s push-approval comment from before the most recent edit. I fixed this by creating a new method that retrieves only the comments made since the most recent update; the methods that count karma and check the testing requirements now use it instead of retrieving all comments. I also wrote a test to ensure that this type of error won’t occur in the future. My initial contribution can be found here: https://github.com/fedora-infra/bodhi/pull/1396. The commit can be found here: https://github.com/fedora-infra/bodhi/commit/ceaf533d9a2dd0dd3496a2c12030689956f453a7.
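The core idea of the fix can be sketched like this (a minimal illustration of the approach, not Bodhi’s actual code; the function name and comment shape here are my assumptions):

```python
from datetime import datetime


def comments_since_last_edit(comments, last_edit_time):
    """Return only the comments posted after the update's most recent edit.

    Filtering this way means a stale "this update can be pushed" comment
    from before a karma reset no longer satisfies the testing check.
    """
    return [c for c in comments if c['timestamp'] > last_edit_time]
```

Karma counting and `met_testing_requirements()`-style checks can then operate on this filtered list rather than on every comment the update has ever received.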

My second code contribution was also to Fedora’s Bodhi. This contribution entailed sharing the --notes-file parameter between the new and edit CLI commands (i.e., a new update and an edited update). It allows the user to specify a file containing the update’s notes. First, this required adding the --notes-file option to the CLI. Then the parameter had to actually retrieve the notes from the notes file: the file needs to be opened, read, and its contents placed into the notes variable. This required a function to fetch and process the notes, aptly named “_get_notes”. Such a change also required substantive testing. I created a test to ensure that the --notes-file parameter would be properly handled when creating an edit request, and another test to assert that providing both the --notes-file and the --notes parameters to an otherwise successful updates edit request would result in an error. Finally, I added a note about this change to the man pages. You can find this contribution here: https://github.com/fedora-infra/bodhi/commit/305f9aa8663ede691ccdd10cef005124276cbd21.

My most recent contribution was my third (also to Fedora’s Bodhi). I ensured that when the exit code of a process is zero, its standard error output (if any) is logged at the info level on the bodhi logger; if the exit code is non-zero, the standard error is logged at the error level. This was in response to a bug in which emails were sent containing the standard error output of processes with a zero exit code. Note that standard error output does not always indicate that an error has occurred — only a non-zero exit code tells us that something actually failed. I also wrote tests to ensure that this bug will never recur. This contribution can be found here: https://github.com/fedora-infra/bodhi/pull/1417.
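The behavior can be sketched roughly like this (illustrative names, not Bodhi’s actual code):

```python
import logging

log = logging.getLogger('bodhi')


def log_process_stderr(returncode, stderr):
    """Log a process's standard error at a level matching its exit code.

    stderr output alone does not imply failure; only a non-zero exit
    code does, so zero-exit stderr goes to INFO rather than ERROR.
    Returns the level used, or None if there was no stderr output.
    """
    if not stderr:
        return None
    level = logging.INFO if returncode == 0 else logging.ERROR
    log.log(level, 'Process stderr: %s', stderr)
    return level
```

With the level chosen this way, alerting that triggers on error-level records no longer fires for harmless warnings from successful processes.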

General advice to prospective interns as an incoming intern:
If you’re a prospective Outreachy applicant, and if you find the application process a bit daunting, you’re not alone. But hey, you’ve got this! Take a few deep breaths and remember: the only way you absolutely cannot get in is if you don’t apply! Your mentors and the community are there to help you, so don’t be afraid to ask any questions. The program is aimed at newcomers who want to make their first contributions, so please remember that if you begin to feel inadequate in any way. I know that it can feel impossible if you start comparing yourself to the people who have committed over a million lines of code to a project. Always remember that they had to start somewhere!

If you have any questions about my experience with the application process, want some pointers or just a confidence boost, PLEASE don’t hesitate to contact me. If I can get accepted, you can too!

I’m trying to keep myself in the UX game, which is complicated by the grand jury thing lasting through the end of June and making contracting positions difficult right now. My mentee (who is really only slightly behind me on her UX path) commented that she’ll be looking to work with non-profits and government sites and such to get more experience.

She pointed out https://www.catchafire.org and https://www.taprootplus.org/ as possible places to hunt through. I took a quick glance at Catchafire while we were co-working, to get an idea of what possibilities there were, and saw something called a ‘website audit’ for the American Cancer Society.

“Huh,” says I. “What is a website audit?”

Looking at their list of needs, I see that they want a report that includes an outline of the organization’s goals for the site, feedback on the current site’s UX in general (they give specifics, but I think my summary is accurate), and recommendations for improvements to help achieve the desired goals.

### Looks like UX research!

Ok, interesting. This sounds a lot like UX research, mostly, with some prep work to figure out what they mean by ‘organizational goals’ and who their users are. Given that these goals could as easily mean ‘goals that someone in the organization thinks sound good’ as ‘goals that were researched across various facets of the organization as well as its users’, I think that’s going to be one of the first things I want to figure out.

They already have a site and some goals, which at least means that I should be able to work with them to figure out whose goals those are, and how specific they are. I’m not entirely sure what they mean by ‘built out site’, although I suspect they may mean working prototype. I don’t _think_ they mean the existing site?

They specifically list millennials as a target market, and I’m really curious as to what they actually mean by that. They say they want to engage young professionals, which is… rather non-specific, but certainly better than nothing.

### What do I want to find out?

I’ve got a call with them Friday morning to see if we’re a good fit, and I’m going to focus on what they mean by organizational goals and who their actual and prospective users are. I strongly suspect that my QE experience will combine nicely with my UX research experience for this particular project, so we shall see. And hey, it’s a good cause!

I’m also noting that they suggest up to nearly half-time work hours for a month and a half. This may be too many hours what with three days eaten by jury duty, but we’ll see.

I do hope it’s not an SEO website audit (which is mostly what Google turns up for ‘website audit’), as I’ve no strong sense of those, but given that SEO shows up nowhere in that volunteer job description, I doubt it. Worst case, I adjust my thinking and do some online research to see what’s relevant. This looks useful in that case!

Recently I had the opportunity to be a part of Ultrahack, held in Helsinki, Finland, as a mentor.
Ultrahack is a combination of a hackathon and a startup accelerator.
As a mentor, my role was to ensure that each team had the best possible chance of fulfilling the evaluation criteria for the contest. I also helped teams with development and pitching.

It was a very exciting place to brainstorm life-changing ideas and convert those ideas into working models. I met so many amazing developers who were building cool stuff. There were a few open source developers, and student open source developers like me!

Being a Debian contributor, I spread the word about what Debian is all about and what makes it the best Linux distribution. I talked to students about the various programmes, like GSoC and Outreachy, in which Debian participates as a mentoring organisation. I also described my role as a GSoC student under Debian and the free-RTC project I worked on. Many female developers were interested in the Outreachy programme, so I described the projects Debian currently offers under it.

During the hackathon, I talked to people about the upcoming annual DebConf. I informed them that they can still apply as speakers or for diversity bursaries, and told them about the logo-making competition.

Thank you Debian for always supporting me!

When I was selecting my JavaScript learning companion, I looked through many reviews of the most popular books: “JavaScript: The Good Parts”, Jon Duckett’s book, “Eloquent JavaScript”, and others. I ended up choosing “Eloquent JavaScript” because it had a code sandbox and was free and short.

However, I don’t recall reading a review on this book from someone who finished it. So I am now filling that gap in hopes that it will be of use to JavaScript beginners.

The book seems short, but it took me quite a while to finish, mostly because of the challenging problems, many of which require hours to solve. I saw many people complaining about them. I, however, think that nobody learns by doing easy stuff; the challenges also keep the pace slow enough for the material to be properly digested. You will be introduced to important concepts like linked lists, recursion, graphs, OOP and functional programming, and some famous math games and algorithms, along with web technologies, HTTP requests and AJAX, asynchronous programming, and much more.

The best chapters are the project chapters, where the author walks you through designing and implementing actual programs using all the techniques he teaches. These chapters came in handy for me during interviews, the most prominent example being chapter 7: during one of my interviews, I was asked to design a very similar game.

I’d say that if you plan to make development your job, this is the book to go with. It’s not a book you just read, though. My advice would be to take notes as you go, type in (not just copy-paste) all the projects along with your own comments for each line of code, and most importantly, do all the problems. “Eloquent JavaScript” requires a lot of time and effort, but it’s well worth it!

This weekend, April 28-30, people coming to Penguicon in Southfield, Michigan can catch a number of sessions of interest to Geek Feminism readers.

Coraline Ada Ehmke is one of the Guests of Honor (her Penguicon schedule). Ehmke “is a speaker, writer, open source advocate and technologist with over 20 years of experience in developing apps for the web. She works diligently to promote diversity and inclusivity in open source and the tech industry.” She and others are participating in a Women in Tech panel and Q&A on Saturday.

Perhaps I’ll see you at the con! Feel free to comment if you’re going to be there and mention any parties or sessions you’re particularly looking forward to.

this is a very simple flask app that pairs lines from two corpuses: white supremacist richard spencer’s twitter followers’ twitter bios and an essay by james baldwin, “letter from a region in my mind”

i was working with the twitter bios earlier in the semester when i made an executive order generator. here were some sample bios:

PC Gamer. Social Democrat. Egalitarian. Admin at @TheDPWiki. I revel in your butthurt.

Biblical Christian and anti-corporate Libertarian. Jesus is my Good Shepard not the State or Allstate and we are definitely in the last days.

FutureDanger is an American news link Heatmap Dashboard. Achieve situational awareness of multiple threat indicators in one view. Be prepared. ΜΟΛΩΝ ΛΑΒΕ

The Donald and #MAGA to save The Merica. Kek Meme generation and Ody cat integration.

i was struck by them: their rhythm, the way they reference an in-group with a shared vocabulary, the way they express ideology in buzzwords. the content of them is interesting, too: common themes are, predictably, whiteness, americanness, christianness, a certain kind of masculinity. they are provocative in predictable ways.

initially, i was going to generate new bios with a markov chain. markov chain projects, as allison helped me understand, highlight the ridiculousness of a corpus. markov chains say, “these words make so little sense that a program that primarily calculates statistical probabilities can reasonably replicate them.” the effect is light, silly. these bios are ridiculous, but they’re not light and silly. i wanted to highlight something else about them.

james baldwin’s writing is the opposite of these bios. putting one of his sentences next to one of their sentences contextualizes both. why do the bios say so little? why in that meaningless way? what do they never say? never own? why are they so angry? what do their writers think has been stolen? baldwin tells us.

the code

first, i got spencer’s followers’ twitter bios: https://gist.github.com/kaganjd/9a00be3932a2485abca047bc6733146e

then, i looked at baldwin’s essay: https://gist.github.com/kaganjd/5e8c2120713ccb3cc737e81c18575c0b

then, i picked out the lines from each corpus that had the word ‘white’ and printed them together: https://gist.github.com/kaganjd/36c455e95502102b2d72fb36c25f1e05

finally, i used all that to write a server: https://github.com/kaganjd/whiteness/blob/master/server.py
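the filtering-and-pairing step can be sketched roughly like this (the sample lines below are made-up placeholders, not the real corpora, and the real server surely differs):

```python
def lines_with_word(text, word):
    """return the lines of a text that contain the given word."""
    return [line for line in text.splitlines()
            if word in line.lower().split()]

# placeholder stand-ins for the two corpora
bios = "pc gamer. social democrat.\nproud white american\nmaga forever"
essay = "the white man's world\nlove takes off the masks"

# pair each matching bio line with a matching essay line
pairs = list(zip(lines_with_word(bios, 'white'),
                 lines_with_word(essay, 'white')))
```

each pair is then one juxtaposition the flask app can serve.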

Overall, I loved it. Sure, there were annoying bits, but there are always annoying bits no matter what the job is.

### Good things

There were lots of good things!

First, I had the best mentor. It helps that I already knew her, and that she offered to be my mentor when she suggested Outreachy to me. She’s been unfailingly helpful and kind, and very supportive.

At the very start, she offered me the choice of working on small-scale existing UX tickets in Fedora, or doing a full-fledged project. The former would have been easier in some ways, but not nearly as useful for advancing my career. It would probably have been easier, if less useful, for her as well.

Second, the fedora-hubs team is a good group of people. Welcoming, helpful, and unfailingly polite. I may have only been there for a few months, but I will miss them.

Fedora people as a whole were similarly helpful; I had nothing to offer my interviewees and participants but my goodwill, and everyone I asked was happy to help out when they were able.

Third, the task was an interesting one. At this point I’d probably describe Fedora Hubs as a whole as an interface that consolidates and filters information about and from many different places, so that an individual can find what’s important or interesting to them within Fedora. I should probably add something about making it easier for new Fedora users to get involved, although it’s hard to say whether that’s Hubs as a whole or specific to the Regional Hubs I was working on. Or both! Probably both.

I’d say the overarching goal for Regional Hubs was to encourage and support community within Fedora. Some of the problems that we were trying to solve were as simple — but not easy — as helping new users more easily get involved with the Fedora community, encouraging in-person social interaction to help people become and remain connected, and helping people find each other and events. Some of these we knew were problems ahead of time (like new users getting and staying involved), and some came up during the interviews (finding people and events).

As some of you likely saw while reading along, locations are hard. This made for a very interesting discussion to figure out how we wanted to handle that, and there are still aspects of it that I suspect need more attention. However, if we want people to be able to find people and events near them, locations are also really important.

I most enjoyed the discussions in which we were exploring the bounds of what we needed to know or do. This included brainstorming in general, the aforementioned complications around locations, and the conversation around the feasibility of the mockups in which we touched on how Hubs might suggest new regional hubs.

### Neutral things

I didn’t really get a chance to learn more about visual design and how to translate a mockup into a higher-fidelity design. This was as much about available time as about the difficulty of explaining it. I do have before and after versions of one of my mockups, and Mo has sent a screencap of creating mockups in Inkscape. Hopefully these will be useful!

I didn’t finish creating the CSS for the high fidelity visual design that Mo had already created. I got stuck on translating from table to div, and needed to focus my attention elsewhere.

### Less good things

First, I really don’t like working remotely. I like people, and having people around is good for me. I also like being able to talk to people about what I’m working on and have them already have the context and knowledge to have productive conversations. This is still possible remotely, but there’s something missing from it in that context.

Second, and relatedly, I feel like remote usability tests and interviews are not as good. They do the job, for sure, but I feel like I missed out on stuff by not being _there_ with the participants. This is likely not helped by the connection to some of the locations participants were in being slow or intermittent.

Unfortunately, I was not able to do any local, in-person usability tests due to snow and other troubles.

This may actually be showing my bias from having done psychology graduate work: all our participants were in-person.

Third, transcription of interviews and usability tests is _annoying_ and really time- and brain-power-consuming. I knew this already from my work with video and audio of people’s interactions with robots and with each other.

On the plus side, interviews and usability tests have less content to deal with, since I don’t need to identify and describe every gesture and every word spoken. Nor do I need to parse through 32 different recordings to try to find and appropriately label the right data to plug into statistical software to find patterns.

Fourth, Git and github and pagure have a higher learning curve than I’d like. This is not helped by the need for ssh keys in all sorts of places. I still wish it were possible to put my public key in _one_ place and have all the tools needed in Hubs work use it. A lack of communication between tools is a very common problem in all sorts of industries, and not just around ssh keys.

Fifth, having my internship include Xmas and New Years early on meant that I was rather less productive than I’d have liked around then. I needed a fair bit of guidance at a time when people weren’t around. Annoying, but not awful.

### In summary

Good program, A+++!

Seriously, I’m glad Outreachy exists in both a theoretical ‘getting more diversity into open source’ sense, and in a practical ‘this was fabulously useful to me’ sense.

I do wish I could see this project through to fruition. But alas, that is not how Outreachy — and many other internships — works.

Now, to put this project into my portfolio and otherwise update it!

(As a reminder to myself and others: the ‘story’ that people talk about when creating portfolios is a combination of providing context for the photos and graphics and screenshots you include, and showing what you have done vs what others did, what you were trying to accomplish, and your thinking about it.)

Previous: usability testing and analysis wrap-up.

Today, I decided to pen down a quick review of the algorithm that lies at the heart of all the beautiful Van Gogh and Picasso versions of your profile picture. Once this algorithm was unleashed onto the world, it went viral, spawning all sorts of apps like Prisma and Vinci, and it has since been extended to video as well. More recent attempts have applied something similar to audio data.

Artwork + Photograph = New Image

This algorithm uses a class of neural networks called Convolutional Neural Networks (CNNs), which have proven very effective in various computer vision tasks such as image classification, segmentation, and object detection. In this blog post, I assume that the reader has a basic understanding of how CNNs work. If you need a quick recap, I direct you to this page; once you’re done, you can hop right back in here.

The algorithm for style transfer was introduced by Gatys, Ecker and Bethge in the paper ‘Image Style Transfer Using Convolutional Neural Networks’, published at CVPR 2016 (it does, however, have an earlier 2015 version on arXiv). This is an important paper in terms of what it led to and how it attracted the attention of people outside the deep learning community; more importantly, it showed that CNNs traditionally used for image classification tasks could, through their powerful image representations, be used for something quite different.

## Outline

Key idea: They introduce an algorithm that is capable of separating the content of an image from its style and recombining them to produce interesting results.

For example, consider an artwork $a$ and a photograph $p$. We want to apply the style of the artwork to the content of the photograph to produce a new stylized image.

$Artwork+Photograph=New\ Image$

The technique involved here is a parametric optimization approach where we want to minimize a linear combination of two losses – Content loss and Style loss.

## Content Representation

### Encoding

For encoding the content representation, they directly use the feature maps at a particular layer. Feature maps (or activation maps) are the neuron responses of a given layer. Consider a layer $l$ in the network having $N_l$ feature maps, each of dimension $M_l$ (the height times the width of the feature map). From this we build a feature-map matrix $F^l \in \mathcal{R}^{N_l \times M_l}$. In essence, we take each feature map, stretch it out into a single row, and stack these rows one below the other.

So the encoding is done as follows:

• Layer $l$ having $N_l$ feature maps each of size $M_l$
• Feature: $F^l \in \mathcal{R}^{N_l \times M_l}$ where $F_{ij}^l$ is the activation of the $i^{th}$ filter at position $j$ in layer $l$
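As an illustrative sketch (shapes assumed, using NumPy rather than the paper’s actual framework), building $F^l$ from a layer’s activations is just a reshape:

```python
import numpy as np


def feature_matrix(activations):
    """Flatten activations of shape (N_l, H, W) into F^l of shape
    (N_l, M_l) with M_l = H * W: each feature map becomes one row."""
    n_l = activations.shape[0]
    return activations.reshape(n_l, -1)


acts = np.random.rand(64, 28, 28)  # a hypothetical layer: 64 maps of 28x28
F = feature_matrix(acts)           # F has shape (64, 784)
```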

### Optimization

During optimization, we perform gradient descent on a white noise image ($\vec{x}$) and incrementally update it to match the feature representation at layer $l$, so as to find another image that matches the content representation of the original photograph ($\vec{p}$).

### Loss

We then use the L2 difference between the two representations to compute the loss. The loss is given by the following equation:

$\mathcal{L}_{content}(\vec{p},\vec{x},l) = \frac{1}{2}\sum_{i,j}\left(F^l_{ij} - P^l_{ij}\right)^2 \text{ .}$

$\frac{\partial \mathcal{L}_{content}}{\partial F_{ij}^l} = \begin{cases} \left(F^l - P^l\right)_{ij} & \text{if } F_{ij}^l > 0 \\ 0 & \text{if } F_{ij}^l < 0 \text{ .} \end{cases}$
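A sketch of that content loss in NumPy (illustrative only; a real implementation would compute it inside an autodiff framework):

```python
import numpy as np


def content_loss(F, P):
    """L_content(p, x, l) = 1/2 * sum_{i,j} (F_ij - P_ij)^2:
    the squared L2 difference between the generated image's feature
    matrix F and the photograph's feature matrix P at layer l."""
    return 0.5 * np.sum((F - P) ** 2)
```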

## Style Representation

The content representation was pretty straightforward. What’s a little less trivial, and more nuanced in my opinion, is the style loss. The paper uses the correlations between the feature maps of a layer to build the style representation. For this, they compute the Gram matrix of the feature maps. Mathematically, a Gram matrix is nothing but a matrix multiplied by its own transpose. So we take the feature-map matrix described earlier and multiply it by its transpose to get the Gram matrix at layer $l$. They compute these matrices at multiple layers in the network.

So the encoding is given by the correlations between feature maps of a layer and is computed as follows:

• Feature: Gram matrix $G^l \in \mathcal{R}^{N_l \times N_l}$, where $G_{ij}^l$ is the inner product between the vectorised feature map $i$ and $j$ in layer $l$:
$G_{ij}^l = \sum_k F_{ik}^l F_{jk}^l.$
• Computed at different layers to get multi-scale representation
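The computation itself is a one-liner (a NumPy sketch, shapes as defined above):

```python
import numpy as np


def gram_matrix(F):
    """G^l = F^l (F^l)^T: entry G_ij is the inner product between
    vectorised feature maps i and j, giving an (N_l, N_l) matrix."""
    return F @ F.T
```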

### Optimization

During optimization, as before, we perform gradient descent on a white noise image ($\vec{x}$) and minimize the L2 difference between the two Gram matrices at each layer, so that the image comes to resemble the style of the artwork ($\vec{a}$).

### Loss

We get the total style loss by summing up the losses at different layers.

$E_l = \frac{1}{4 N_l^2 M_l^2}\sum_{i,j}\left(G^l_{ij}-A^l_{ij}\right)^2$
$\mathcal{L}_{style}(\vec{a},\vec{x}) = \sum_{l=0}^{L}w_{l}E_l$
$\frac{\partial E_l}{\partial F_{ij}^l} = \begin{cases} \frac{1}{N_l^2 M_l^2}\left((F^l)^{\mathrm T}\left(G^l-A^l\right)\right)_{ji} & \text{if } F_{ij}^l > 0 \\ 0 & \text{if } F_{ij}^l < 0 \text{ .} \end{cases}$
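The per-layer loss $E_l$ and the weighted sum $\mathcal{L}_{style}$ above can be sketched as (NumPy, illustrative only):

```python
import numpy as np


def layer_style_loss(G, A, n_l, m_l):
    """E_l = 1/(4 N_l^2 M_l^2) * sum_{i,j} (G_ij - A_ij)^2, comparing the
    generated image's Gram matrix G with the artwork's Gram matrix A."""
    return np.sum((G - A) ** 2) / (4.0 * n_l ** 2 * m_l ** 2)


def style_loss(grams_x, grams_a, weights, shapes):
    """L_style = sum_l w_l E_l over the chosen layers; `shapes` holds
    the (N_l, M_l) pair for each layer."""
    return sum(w * layer_style_loss(G, A, n, m)
               for G, A, w, (n, m) in zip(grams_x, grams_a, weights, shapes))
```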

### Bonus: Why Gram matrices for style?

Even after reading the original paper a couple of times to gain a good understanding of the algorithm, I couldn’t find an explanation of why Gram matrices encode the style of an image. This made me dig deeper into the realms of the internet for a satisfactory answer. Section 2.2 of this nice paper has a good explanation, presented below.

Intuitively, these matrices compute feature statistics that have two important properties: spatial averaging and local coherence.

Starting with the first of the two properties: what does spatial averaging mean? It means that we want to quantify how important a filter is for the layer without bothering about the input locations at which this filter gets activated. This is a crucial property for representing style, because the style of an image is a static property: it is supposed to be invariant to position in the image. For instance, the brush strokes that the artist uses for painting the sun are similar to the ones used for painting the house. This spatial averaging is taken care of by the diagonal terms of the matrix.

The off-diagonal terms are where the second property, local coherence, comes in. The off-diagonal elements tell us how correlated two filters are at specific spatial locations. A term of 0, for instance, tells us that the two filters in question are not supposed to be activated at the same locations. So they enforce some kind of separation or coherence between filters. This is the most basic argument; there were further explanations that went into Gabor-like filters and Fourier transforms, but I haven’t looked into them.

This makes sure that the style representation remains blind to the global arrangement of objects while still retaining the local correlation between features.

## The Network Model: Combining the content and style losses

The final loss to be optimized is a linear combination of both the content and the style losses given by:

$\mathcal{L}_{total}=\alpha\mathcal{L}_{content}+\beta\mathcal{L}_{style}$

where $\alpha$ and $\beta$ are constants that control how much of the style and content we want in the final image. A schematic representation of the entire architecture is shown in the figure below.

The Network Architecture (Courtesy: Gatys et al CVPR 2016)

## Discussion

From a number of experiments, the authors present the following points:

• Use higher layers for content → capture high level image representations such as objects and their arrangement in the input image, and discard the detailed pixel-level information.
• Use multiple layers for style → Lower layers are more grainy. Higher layers are able to capture local structures pretty well.

One major takeaway from this paper is that CNNs are capable of encoding highly powerful and separable feature representations for the content and style of images. In a way, the authors show us that we could completely ignore the results coming out of the network and focus purely on the intermediate feature representations, which are pretty powerful in themselves, for performing completely new tasks in the domain of computer vision.

Just here for code? Look no further.

#### What is a Music Visualizer?

Generation of visuals based on the music (demo).

#### How to Implement a Music Visualizer?

1. Process the audio file and run a Fourier transform on the audio data to get information about the original sound wave (amplitude and frequency)
2. Store this data
3. Output a visual based on the stored data when the music is played

#### Things to Think About Before Coding

• How to play the sound?
• How to implement Fourier transformation?
• How to interpret information from the Fourier transformation?
• How to sync visual with music?
• What does the data in an audio file represent?

How I Implemented my Music Visualization Software

I wrote my visualization software in C and used the SDL2 sound API to play an audio WAV file.  To compute the Fourier transform I used FFTW, a C routine library known for efficiently computing the Discrete Fourier Transform (DFT).  My visuals (a power spectrum of selected frequencies) are output to the Linux terminal.

Using DFT Results to Calculate Sound Frequency

Calculating the frequencies from the DFT is a bit tricky.  The DFT results come from adding a bunch of waves at a specific frequency index k, where k runs from 0 to N-1 and N is the number of samples. Adding the waves acts as a filter (read up on constructive and destructive interference of waves). The DFT returns the amount of frequency k in the signal (amplitude and phase), which is represented in complex form, i.e. real and imaginary values.

Now to calculate the sound frequency from DFT we need to use the sampling rate value:

```
freq = i * Fs / N;    (1)
```

where:
• freq = frequency in Hertz
• i = index (position in the DFT output; can also be thought of as the number of cycles)
• Fs = sampling rate of the audio
• N = size of the FFT buffer or array

To explain further, let's say that:

```
N  = 2048   // a buffer that holds 2048 audio data samples
Fs = 44100  // a common sample rate (frames per sec) for audio signals: 44.1 kHz
```

The spectral bin numbers, aka frequency bins, using equation (1) from above would be:

```
bin:     i * Fs / N           = freq
0    :    0 * 44100 / 2048    = 0.0 Hz
1    :    1 * 44100 / 2048    = 21.5 Hz
2    :    2 * 44100 / 2048    = 43.1 Hz
3    :    3 * 44100 / 2048    = 64.6 Hz
4    :    ...
5    :    ...
1024 : 1024 * 44100 / 2048    = 22.05 kHz
```

Note that the useful index range for frequencies is from 1 to N/2. The 0th bin represents "DC" and the N/2-th represents the "Nyquist" frequency. Frequencies above the Nyquist frequency are redundant data.

Also note that the magnitude is needed to create the power spectrum.

Finding Peak Magnitude and Using it to Find the Peak Frequency

For our visual we need to distinguish which frequency (out of the N/2 usable bins) has the strongest power (the peak magnitude).  So we'll need to find the position of this peak magnitude and use it to find the peak frequency.

Now to find the magnitude we need to use the results from the DFT.  The DFT gives us the real (re) and imaginary (im) values, so we can treat these values as a coordinate pair and use the Pythagorean theorem to find the magnitude (mag):

re^2 + im^2 = mag^2;       so,
mag = sqrt(re*re + im*im)

To find the peak frequency of all 2048 sample frames, we need to find the index where the magnitude is the largest, then substitute that index for "i" in the frequency equation (1).  The pseudocode algorithm would look like:

```
// copy real input data to complex FFT buffer
for i = 0 to N - 1
    fft[2*i]   = data[i]
    fft[2*i+1] = 0

perform in-place complex-to-complex FFT on fft[] buffer

// calculate power spectrum (magnitude) values from fft[]
for i = 0 to N / 2 - 1
    re = fft[2*i]
    im = fft[2*i+1]
    magnitude[i] = sqrt(re*re + im*im)

// find largest peak in power spectrum
max_magnitude = -INF
max_index = -1
for i = 0 to N / 2 - 1
    if magnitude[i] > max_magnitude
        max_magnitude = magnitude[i]
        max_index = i

// convert index of largest peak to frequency
freq = max_index * Fs / N
```

Instead of only calculating a single peak frequency based on the peak magnitude over N (2048) sample frames, I calculated multiple peak frequencies and peak magnitudes for the following frequency ranges also:

• 20 to 140 Hz:  Bass range
• 140 to 400 Hz:  Mid-bass range
• 400 to 2600 Hz:  Midrange
• 2600 to 5200 Hz:  Upper midrange
• 5200 Hz to Nyquist:  High end

The C implementation would look like:

```c
double max[5] = {1.7E-308, 1.7E-308, 1.7E-308, 1.7E-308, 1.7E-308};
double re, im;
double peakmax = 1.7E-308;
int max_index = -1;

for (int m = 0; m < F/2; m++) {
    re = fftw.out[m][0];
    im = fftw.out[m][1];
    fftw.magnitude[m] = sqrt(re*re + im*im);

    float freq = m * (float)wavSpec.freq / F;

    if (freq > 19 && freq <= 140) {
        if (fftw.magnitude[m] > max[0]) { max[0] = fftw.magnitude[m]; }
    } else if (freq > 140 && freq <= 400) {
        if (fftw.magnitude[m] > max[1]) { max[1] = fftw.magnitude[m]; }
    } else if (freq > 400 && freq <= 2600) {
        if (fftw.magnitude[m] > max[2]) { max[2] = fftw.magnitude[m]; }
    } else if (freq > 2600 && freq <= 5200) {
        if (fftw.magnitude[m] > max[3]) { max[3] = fftw.magnitude[m]; }
    } else if (freq > 5200 && freq <= audio.SamplesFrequency/2) {
        if (fftw.magnitude[m] > max[4]) { max[4] = fftw.magnitude[m]; }
    }

    if (fftw.magnitude[m] > peakmax) {
        peakmax = fftw.magnitude[m];
        max_index = m;
    }
} // end for
```

To simplify the code, we can store the frequency ranges into an array and just process that array:

```c
double freq_bin[] = {19.0, 140.0, 400.0, 2600.0, 5200.0, nyquist};

for (int j = 0; j < frames/2; ++j) {
    re = fftw.out[j][0];
    im = fftw.out[j][1];
    magnitude = sqrt(re*re + im*im);

    double freq = j * (double)wavSpec.freq / frames;

    for (int i = 0; i < BUCKETS; ++i) {
        if ((freq > freq_bin[i]) && (freq <= freq_bin[i+1])) {
            if (magnitude > peakmaxArray[i]) {
                peakmaxArray[i] = magnitude;
            }
        }
    }

    if (magnitude > peakmax) {
        peakmax = magnitude;
        max_index = j;
    }
}
```

We now have frequency and power information of the original sound wave and can store this data into another array which will later be accessed to create our visual.

This algorithm analyzes at most 2048 sample frames at a time, for this specific example. Run the algorithm "n" times until all the waveform data in the audio file has been processed. I'll leave it up to you to find out the value of "n". Hint: it requires knowing the size of the audio data and other useful information about the sound in a WAV file, so read up on WAV audio files.

Lastly, we can create a visual in the form of a magnitude vs frequency 2D graph, or any 3D representation, like a sphere, while the music is playing!

But how do we sync the visuals with the music?

Well, that's easy: we can utilize the sound API's features, in my case SDL2's.  SDL uses a callback function to refill the buffer with audio data whenever it's about to be empty.  The buffer has to be filled in order to continue playing the music, so whenever the callback function is called, just output the correct visual.

And that's it!
You should now be capable of implementing a music visualizer.

Happy coding

Other people's work worth mentioning:

#### Update:

• This blog is useful for ARM microprocessors running Linux kernel versions 4.1.2-ti-r4 to 4.4.0.
• It could still be relevant for earlier kernel releases, but earlier releases have not been tested.
• If your ARM microprocessor is running kernel 4.4.16-ti-rt or newer, go to part II.

#### Interfacing a second SD card reader to the Beaglebone Black

• I could not find any tutorials or guides in forums on how to interface another SD card to the BeagleBone Black, so I thought I'd share how I got mine up and running.  I won't explain the device tree bindings in detail, but you can use my solution as a reference.

#### MMC2 PINMUX CONFIGURATION:

```
mmc3_pins: pinmux_mmc3_pins {
    pinctrl-single,pins = <
        0x44  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_a1.mmc2_dat0, INPUT_PULLUP | MODE3 */
        0x48  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_a2.mmc2_dat1, INPUT_PULLUP | MODE3 */
        0x4C  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_a3.mmc2_dat2, INPUT_PULLUP | MODE3 */
        0x78  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_ben1.mmc2_dat3, INPUT_PULLUP | MODE3 */
        0x88  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_csn3.mmc2_cmd, INPUT_PULLUP | MODE3 */
        0x8C  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_clk.mmc2_clk, INPUT_PULLUP | MODE3 */
        0x40  (PIN_OUTPUT_PULLDOWN | MUX_MODE7)  /* gpmc_a0.gpio1_16 */
        0x74  (PIN_INPUT_PULLDOWN | MUX_MODE7)   /* mmc2_sdcd, p9_13. Note: don't know why, but we set the card detect pinout to be GPIO */
        0x15c (PIN_INPUT_PULLDOWN | MUX_MODE1)   /* mmc2_sdwp, p9_17. Note: write protect is not configured in the device tree settings */
    >;
};
```

• Note that the mmc0, mmc1, and mmc2 lines on the BeagleBone refer to mmc1, mmc2, and mmc3 in the device tree.

```
0x88 (PIN_INPUT_PULLUP | MUX_MODE3)    /* gpmc_csn3.mmc2_cmd, INPUT_PULLUP | MODE3 */
...
0x40 (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_a0.gpio1_16 */
```

• You can see from the excerpt above that the two lines are connected.

#### MMC2 DEVICE TREE BINDINGS:

• Pinctrl-0 is set to mmc2's pinmux (mmc3_pins)
• cd-gpios (card detect) is set to gpio0_31 and is active high, i.e. gpio0_31 is high when an SD card is inserted in the SD card reader. Hence the "cd-inverted" property, which marks the CD line as active high.
• max-frequency is set to 25 MHz because of the SD bus speed specifications.
• dmas parameters: <&edma 12 &edma 13>
• Here's the mmc device tree bindings documentation:




```
&mmc3 {
    vmmc-supply = <&vmmcsd_fixed>;
    ti,dual-volt;
    ti,needs-special-reset;
    ti,needs-special-hs-handling;
    pinctrl-names = "default";
    pinctrl-0 = <&mmc3_pins>;
    cd-gpios = <&gpio0 31 GPIO_ACTIVE_HIGH>;
    cd-inverted;
    bus-width = <4>;
    max-frequency = <25000000>;
    dmas = <&edma 12
            &edma 13>;
    dma-names = "tx", "rx";
    status = "okay";
};

&edma {
    ti,edma-xbar-event-map = /bits/ 16 <1 12
                                        2 13>;
};
```




• This blog entry is useful for ARM microprocessors running kernel versions 4.4.16-ti-rt and newer.
• For configuring mmc2 on earlier kernel versions, go to part I of this topic.

#### What has changed since earlier kernel releases?

• There have been recent changes (end of last year) to the device tree bindings for eDMA3.  Older kernels used a now-deprecated binding in their DTS files, and to get mmc2 (labeled mmc3 in the dts files) to work properly there, you must use the ti,edma-xbar-event-map property for edma:

```
&mmc3 {
    vmmc-supply = <&vmmcsd_fixed>;
    ti,dual-volt;
    ti,needs-special-reset;
    ti,needs-special-hs-handling;
    pinctrl-names = "default";
    pinctrl-0 = <&mmc3_pins>;
    cd-gpios = <&gpio0 31 GPIO_ACTIVE_HIGH>;
    cd-inverted;
    bus-width = <4>;
    max-frequency = <25000000>;
    dmas = <&edma 12
            &edma 13>;
    dma-names = "tx", "rx";
    status = "okay";
};

&edma {
    ti,edma-xbar-event-map = /bits/ 16 <1 12
                                        2 13>;
};
```

What are the new dt bindings for eDMA3?

• "ti,edma3-tpcc" for the channel controller(s)
• "ti,edma3-tptc" for the transfer controller(s)
• "ti,am335x-edma-crossbar" for the crossbar event to channel map

The changes to the mmc3 configuration (the cd-gpios, dmas, and #size-cells lines) are shown below:

```
&mmc3 {
    vmmc-supply = <&vmmcsd_fixed>;
    ti,dual-volt;
    ti,needs-special-reset;
    ti,needs-special-hs-handling;
    pinctrl-names = "default";
    pinctrl-0 = <&mmc3_pins>;
    cd-gpios = <&gpio3 16 GPIO_ACTIVE_LOW>;
    bus-width = <4>;
    max-frequency = <25000000>;
    dmas = <&edma_xbar 12 0 1
            &edma_xbar 13 0 2>;
    #size-cells = <0>;
    dma-names = "tx", "rx";
    status = "okay";
};
```

This is assuming that the pinmux configuration for mmc3 (&mmc3_pins) is:


```
mmc3_pins: pinmux_mmc3_pins {
    pinctrl-single,pins = <
        0x44  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_a1.mmc2_dat0, INPUT_PULLUP | MODE3 */
        0x48  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_a2.mmc2_dat1, INPUT_PULLUP | MODE3 */
        0x4C  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_a3.mmc2_dat2, INPUT_PULLUP | MODE3 */
        0x78  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_ben1.mmc2_dat3, INPUT_PULLUP | MODE3 */
        0x88  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_csn3.mmc2_cmd, INPUT_PULLUP | MODE3 */
        0x8C  (PIN_INPUT_PULLUP | MUX_MODE3)     /* gpmc_clk.mmc2_clk, INPUT_PULLUP | MODE3 */
        0x40  (PIN_OUTPUT_PULLDOWN | MUX_MODE7)  /* gpmc_a0.gpio1_16 */
        0x74  (PIN_INPUT_PULLDOWN | MUX_MODE7)   /* mmc2_sdcd, p9_13. Note: don't know why, but we set the card detect pinout to be GPIO */
        0x15c (PIN_INPUT_PULLDOWN | MUX_MODE1)   /* mmc2_sdwp, p9_17. Note: write protect is not configured in the device tree settings */
    >;
};
```



Hello!

This post is going to cover how you can build and run your first kernel module.

Pre-requisites:

1. Basic C Programming
2. A basic idea of what the kernel is

What is a module?

• a relocatable object file
• has a .ko extension
• once loaded, has all the potential of any other part of the kernel

Let’s start writing the C program. An easy way to remember the header files to be included: since you have to write a module for the Linux kernel, start by including the file module.h from linux. Next, you need to initialize the module, so include the file init.h from linux.
So far, you should have reached the following state:

#include <linux/module.h>
#include <linux/init.h>

Now, in order to load a module into the kernel, you need to register an initialization function; do that via the module_init() macro. Likewise, for cleanup after unloading the module, register an exit function via the module_exit() macro [Note: a cleanup function is necessary, as a module without one cannot be unloaded].

The functions registered with these macros are called automatically when the module is loaded or unloaded.

We additionally mark these functions with the __init and __exit attributes.

static int __init demo_init(void)
{
pr_info("Hello World! I'm loaded at 0x%p.\n", demo_init);
return 0;
}
static void __exit demo_exit(void)
{
pr_info("Goodbye, people! I'm unloading from 0x%p.\n", demo_exit);
}

Here, pr_info() is the printk() function with the priority KERN_INFO.

The module looks pretty good, right? You’ve covered everything except having registered the init and exit functions. You can do that as follows:

module_init(demo_init);
module_exit(demo_exit);

All done now. From any programmer’s perspective the code should run now. Wait, there’s something kind of important while writing the linux kernel modules. Hint: Richard Stallman.

MODULE_LICENSE("GPL v2");

This.

Literally. Without it, your module will be marked as tainted and will have a restricted API for interacting with the rest of the kernel. Let’s not forget the love (or lack thereof) you might receive when asking kernel developers for help.

There is also a way to mark yourself the author if you’ve written a module. You do it by:

MODULE_AUTHOR("MOJO JOJO");

Who wouldn’t want to add this line, right?

Congratulations on having written your first Linux kernel module. Now, let’s check if it is all correct.

Compilation

Pre-requisites:

• kernel source installed (found under /lib/modules/$(uname -r)/build/) [Tip: make sure your kernel source has either been compiled before or has been through the $ make modules_prepare stage. This is the way the necessary configuration and dependency information is generated.]

Write Makefile for the module

If you named your module file, say, demo.c, then add the following to your Makefile:

obj-m += demo.o
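For reference, a complete minimal kbuild Makefile built around this line might look like the sketch below (the all and clean targets are conveniences I've added, not from the obj-m line itself):

```make
# Build the demo module against the headers of the running kernel.
obj-m += demo.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```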

Then, type:

```
$ make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
```

Make sure to point the -C flag at the correct kernel sources. After this, your directory should contain files like the below:

```
demo.c  demo.mod.c  demo.o   modules.order
demo.ko demo.mod.o  Makefile Module.symvers
```

To make the output verbose, set the environment variable as follows:

```
$ export KBUILD_VERBOSE=1
```

or

```
$ make V=1
```

Loading and unloading the modules

Now that the module has been built successfully, it is ready to be loaded into the kernel. For that, we use the following command:

```
$ sudo insmod demo.ko
```

insmod will link the module demo into the running kernel resolving all symbols.

This is exactly where the function with __init label is called.

```
$ lsmod
```

This will give a list of all loaded modules, as per /proc/modules. Your module will also show up, like:

```
Module                  Size  Used by
demo                   16384  0
```

In order to see the output of your program, use:

```
$ dmesg
```

You should see:

[20564.992173] Hello World! I'm loaded at 0xffffffffc0018000.

In order to unload a module, you can use:

```
$ sudo rmmod demo
```

This is where the function with the __exit label is called.

```
$ dmesg
[20707.254942] Goodbye, people! I'm unloading from 0xffffffffc0a69000.
```

Things to take care of

• Not all modules are device drivers, though many of them are.

I attended the Linux Foundation's Embedded Linux and OpenIoT Summit in Portland, Oregon, February 21-23, 2017. My travel was sponsored by FOSS Outreachy and the conference registration fee was covered by a diversity scholarship from the Linux Foundation.

The Linux Foundation has posted video and slide sets here.

As an Outreachy Alumni & Mentor, I'll make my diversity & inclusion observation first:

While I felt in the majority as an older attendee, I was certainly in the very small minority as a woman.  In some presentations I was the only woman in the audience.  I noticed that the posted photographs of the conference contain a lot of women.  You could probably take a headcount of the women present using those photos.

With respect to inclusion, i.e. did I feel included during the conference?  Generally, yes.  However, my note to my future self is to bring something to share at the technical showcase.  Giving a presentation is certainly a way to get more involved, but I'd couple it with a table at the technical showcase for the best experience!

Here are a few of my favorite things...

Favorite Keynote: Sarah Cooper, GM of IoT Solutions at AWS

Sarah spoke of "Making Experimentation Easy" by applying similar methodologies that have enabled cloud software's rapid rate of innovation to embedded devices.

The conference photographer snapped and posted this photo of Sarah.
Although she was triumphant in her presentation, she did *not* actually
drop the mic!

Two take-aways from Sarah:

1) "To fail fast you have to know you have failed, limit the blast radius and shrug it off."
Sarah was referring to carefully selected beta customers and failing fast. A great principle to apply to everything you do!

2) "You should absolutely come straight to the GM with your resume!!"
This quote came in a private email from Sarah where she responded to my query about job opportunities at AWS. Her assertive "absolutely" resonated with me. As I sort through my contacts from my earlier years in the industry, I tend to skip over contacts that have risen high in management or technical ranks. Sarah's *absolutely* spurred me on to reach out to those old contacts and to make more new, cold contacts.

2 Things that made me go hmmm:

1) There are developers who believe that user space drivers are more efficient than kernel drivers. They don't mean more efficient as in they don't want to bother upstreaming the driver. They actually mean more efficient in that they think the user space driver performs better.  I didn't meet one of those developers...they were only whispered about ;)

2) Android Things (Intel & Google product) will not use our lovely IIO drivers. They will not include any non-essential drivers in the kernel image and there will not be a way to rebuild it. They have set up a git repository for the world to share user space drivers for sensors.

Favorite Presentation:   Android Things & Android Things Deep Dive

Intel: Anisha Kulkarni, Geeta Krishna, Sanrio Alvares
Yes, even despite the aforementioned IIO driver exclusion.

Favorite Presentation: Voice-controlled Home Automation

IBM: Kalonji Bankole,  Prashant Kanal
Demo'd a complete implementation using serverless framework (OpenWhisk) and cognitive services (IBM Watson)

Favorite Sponsor Swag: SUSE.  I now feel guilty that I don't use their distro.

Favorite Sponsors: Intel & SUSE & Linaro

Intel's staff at the booth were so knowledgeable I just assumed they were from a development group. I hope they were flattered, and not insulted when I asked. They were marketing.

SUSE's Patrick Quairoli shared insights on development at SUSE.

Linaro & 96boards: Lost his name, but he gave me a very patient tutorial on each board chained to his booth.  Also got lots of stickers to brighten up my clamshell!

Felt like home.  A too big TV with too many channels!  Super nice group of developers!

IIO Community Sightings: Matt Ranostay
Matt gave a great IIO Subsystem presentation!!! It contained a live demo of one of his more recent driver additions: heart rate and pulse oximeter. When Matt's heart rate only measured 42, he wrote it off to a loose connection, but I'm not convinced. I may go look for a bug in that driver ;)

IIO Community Hindsight: David Lechner
When I met David displaying his ev3 devices at the technical showcase, I didn't know of him from the IIO community. David has some drivers he wants upstreamed to IIO.  Potential Outreachy projects?  I'm fuzzy on this. Can we add support for sensors that basically have no datasheet, but that David has reverse engineered?

IIO Community Hindsight: Jason Kridner
Jason gave a presentation and also showed off some beaglebone devices at the technical showcase. Jason noted in his presentation that he'd like to see additional sensor support in IIO.

Welcome to my blog!  I'm Alison Schofield and these are my adventures as an Outreachy Intern for Linux IIO (Industrial I/O). I live in Hood River, Oregon with my husband and 4 boys.  I left the computer industry 13 years ago to raise a family.  I'm back now and armed with a handful of sensors (thanks Daniel!)

I chose Linux and IIO because I absolutely love the work that it leads me to do each day.  I need to get to the bottom of things and love gadgets.  I see all that in the Industrial I/O system.

Here we go...

Update - September 2016:
The Outreachy internship is officially over, but I'm still at it.  I'm continuing to study the IIO subsystem under the guise of being a mentor for Outreachy Round 13!  Now I get to do another favorite thing - teach!

Still going...

Update - April 2017:
Still having fun with sensors and the Outreachy program (Round 14 now).  Attended first conference - Open IoT and Embedded Linux in Portland.  It's a big IoT world out there and growing.  I'm actively seeking a paying gig now!!!

Hi everyone,

I’ve recently started studying TensorFlow; it is such a great framework, and TensorBoard is amazing.

But I’m not here to say how awesome it is (you know it); I’m here to talk about some of the things I’ve been doing with TensorFlow.

Basically, the HelloWorld of TensorFlow is building a model to classify MNIST digits, so I created a repository with a lot of links to cool MNIST tutorials, some simple and direct, others more complex. I also created a repository for a handwritten digits classifier on Android, showing how to build one on your own.

Hope it helps somehow, have a nice day!

A sort of topic-specific collection of links from about the last year, broadly talking about inclusion in communities, online and off, especially in geek(y) spaces.

What kind of discourses and conversations do we want to encourage and have?

How do we reduce online harassment?

• “Paths: a YA comic about online harassment”, by Mikki Kendall: “‘It’s not that big of a deal. She’ll get over it.’ ‘Even if she does, that doesn’t make this okay. What’s wrong with you?'”
• “On a technicality”, by Eevee: “There’s a human tendency to measure peace as though it were the inverse of volume: the louder people get, the less peaceful it is. We then try to optimize for the least arguing.”
• “Moderating Harassment in Twitter with Blockbots”, by ethnographer R. Stuart Geiger, on the Berkeley Institute for Data Science site: “In the paper, I analyze blockbot projects as counterpublics…I found a substantial amount of collective sensemaking in these groups, which can be seen in the intense debates that sometimes take place over defining standards of blockworthyness…..I also think it is important distinguish between the right to speak and the right to be heard, particularly in privately owned social networking sites.”
• “The Real Name Fallacy”, by J. Nathan Matias, on The Coral Project site: “People often say that online behavior would improve if every comment system forced people to use their real names….Yet the balance of experimental evidence over the past thirty years suggests that this is not the case. Not only would removing anonymity fail to consistently improve online community behavior – forcing real names in online communities could also increase discrimination and worsen harassment….designers need to commit to testing the outcomes of efforts at preventing and responding to social problems.”

What does it take to make your community more inclusive?

• “Want more inclusivity at your conference? Add childcare.” by Mel Chua and then “Beyond ‘Childcare Available’: 4 Tips for Making Events Parent-Friendly”, by Camille Acey: “I’ve pulled together a few ideas to help move ‘Childcare Available’ from just a word on a page to an actual living breathing service that empowers people with children to learn/grow alongside their peers, engage in projects they care about, and frankly just have a little break from the rigors of childcare.”
• Project Hearing: “Project Hearing is a website that consolidates information about technology tools, websites, and applications that deaf and hard of hearing people can use to move around in the hearing world.”
• “Conference access, and related topics”, by Emily Short: “This is an area where different forms of accessibility are often going at right angles.”
• “SciPy 2016 Retrospective”, by Camille Scott: “SciPy, by my account, is a curious microcosm of the academic open source community as a whole.”
• “Notes from Abstractions”, by Coral Sheldon-Hess: “Pittsburgh’s Code & Supply just held a huge (1500 people) conference over the last three days, and of course I’d signed up to attend months ago, because 1) local 2) affordable 3) tech conference 4) with a code of conduct they seemed serious about. Plus, “Abstractions” is a really cool name for a tech conference.”
• “The letter I just sent to Odyssey Con”, by Sigrid Ellis: “None of us can know the future, of course. And I always hope for the best, from everyone. But I would hate for Odyssey Con to find itself in the midst of another controversy with these men at the center.” (This is Ellis’s post from April 7, 2016, a year before all three of Odyssey Con’s Guests of Honor chose not to attend Odyssey Con because of the very issue Ellis discussed.)
• “The realities of organizing a community tech conference: an ill-advised rant”, by Rebecca Miller-Webster: “…there’s a lot of unpaid labor that happens at conferences, especially community conferences, that no one seems to talk about. The unpaid labor of conference organizers. Not only do people not talk about it, but in the narrative around conferences as work, these participants are almost always the bad guys.”
• “Emotional Labor and Diversity in Community Management”, by Jeremy Preacher, originally a speech in the Community Management Summit at Game Developers Conference 2016: “The thing with emotional labor is that it’s generally invisible — both to the people benefiting from the work, and to the people doing it. People who are good at it tend to do it unconsciously — it’s one of the things we’re talking about when we say a community manager has ‘good instincts’.”….What all of these strategies do, what thinking about the emotional labor cost of participation adds up to, is make space for your lurkers to join in.”
• “White Corporate Feminism”, by Sarah Sharp: “Even though Grace Hopper was hosted in Atlanta that year, a city that is 56% African American, there weren’t that many women of color attending.”
• “You say hello”, by wundergeek on “Go Make Me a Sandwich (how not to sell games to women)”: “Of course, this is made harder by the fact that I hate losing. And there will be people who will celebrate, people who call this a victory, which only intensifies my feelings of defeat. My feelings of weakness. I feel like I’m giving up, and it kills me because I’m competitive! I’m contrary! Telling me not to do a thing is enough to make me want to do the thing. I don’t give up on things and I hate losing. But in this situation, I have to accept that there is no winning play. No win condition. I’m one person at war with an entire culture, and there just aren’t enough people who give a damn, and I’m not willing to continue sacrificing my health and well-being on the altar of moral obligation. If this fight is so important, then let someone else fight it for a while.”
• “No One Should Feel Alone”, by Natalie Luhrs: “In addition to listening and believing–which is 101 level work, honestly–there are other things we can do: we can hold space for people to speak their truth and we can hold everyone to account, regardless of their social or professional position in our community. We can look out for newcomers–writers and fans alike–and make them welcome and follow through on our promise that we will have their backs. We can try to help people form connections with each other, so they are not isolated and alone.”
• “Equality Credentials”, by Sara Ahmed: “Feminist work in addressing institutional failure can be used as evidence of institutional success. The very labour of feminist critique can end up supporting what is being critiqued. The tools you introduce to address a problem can be used as indicators that a problem has been addressed.”
• “Shock and Care: an essay about art, politics and responsibility”, by Harry Giles (Content note: includes discussion of sex, violence and self-injury in an artistic context): “So, in a political situation in which care is both exceptionally necessary and exceptionally underprovided, acts of care begin to look politically radical. To care is to act against the grain of social and economic orthodoxy: to advocate care is, in the present moment, to advocate a kind of political rupture. But by its nature, care must be a rupture which involves taking account of, centring, and, most importantly, taking responsibility for those for whom you are caring. Is providing care thus a valuable avenue of artistic exploration? Is the art of care a form of radical political art? Is care, in a society which devalues care, itself shocking?”

Some bits are easy enough: I’ve got a photo now, and there’s a section about me in which one can take a look at my medium, what I like to do in my free time, and how I got to UX (this is too long and I need to cut stuff).

Tabs, though. Tabs are _hard_.

### Why are Tabs hard?

As I mentioned in my last post, I’m using Pelican to generate my site. I’m using restructured text plus CSS to write most of my content.

#### Responsive?

I first tried to use responsive tabbed navigation from Codyhouse because it looks really nice. Unfortunately, CSS is _extremely_ picky about order and I don’t even entirely follow what’s happening. Between the responsive Pelican theme pelican-blue I’ve modified (and which apparently is no longer… responsive? I broke something somewhere), and the figures and button code bits that I added, I get really confused as to what’s going on.

I pulled the CSS and HTML into notepad++ to get things lined up nicely and in a larger visible space than in the relevant codepen, which meant that I was more able to figure out what was going on and learned a whole lot about CSS. Unfortunately, although in that instance I got the base theme’s responsiveness working again, it collides with the responsive tabbed navigation.

In addition to this, I cannot figure out how to make the tabs actually _change_ what I’m seeing. I strongly suspect something is broken in the javascript that goes along with that code, but I have a lot of trouble understanding javascript at least thus far. This is not helped by trying to learn responsive text, css, pelican (and random bits of python) at the same time. I’m also not entirely sure that I managed to make the links in the tabs go to the right place: I tried using ids to label the content, and have the references go to those places, but ¯\_(ツ)_/¯.
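
For what it’s worth, the core of what the tab Javascript has to do is small: when a tab link is clicked, hide every panel and show only the one whose id matches the link’s href. Here’s a minimal sketch of that logic, with the DOM wiring commented out (the `.tab-link` class and panel ids are hypothetical, not taken from the Codyhouse code):

```javascript
// Pure tab-switching logic: given all panel ids and the clicked one,
// report which panel should be visible. Keeping it DOM-free makes it
// easy to reason about (and test) outside a browser.
function visibleStates(panelIds, clickedId) {
  var states = {}
  for (var i = 0; i < panelIds.length; i++) {
    states[panelIds[i]] = (panelIds[i] === clickedId)
  }
  return states
}

// In the page, the wiring might look like this (hypothetical names):
// document.querySelectorAll('.tab-link').forEach(function (link) {
//   link.addEventListener('click', function (event) {
//     event.preventDefault()
//     var target = link.getAttribute('href').slice(1) // '#completed' -> 'completed'
//     var states = visibleStates(allPanelIds, target)
//     for (var id in states) {
//       document.getElementById(id).style.display = states[id] ? 'block' : 'none'
//     }
//   })
// })
```

If the tabs do nothing at all, it usually means either the click handler never ran (script not loaded, or loaded before the elements exist) or the href ids don’t match the panel ids.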

I’m wondering if I can do something with Pelican’s jinja2 template engine to get tabs to work, as I strongly suspect that my problem is something to do with javascript and it looks like a lot of the plugins in Pelican that use javascript also use these templates. But, that does mean learning jinja2 as well!

#### Can I not use javascript?

Right, so the pretty but frustrating responsive tabs took up most of my time yesterday. I looked at a few other links that talked about pure CSS tabs, but they looked like they would break if I breathed at them wrong. I did try one of them, and my suspicion seemed accurate in that case, which meant that when I tried to add it to my existing site it basically did nothing at all.

I was trying to figure out how to use bootstrap today, as I noticed that it was somewhat implemented in pelican through restructured text. Unfortunately, tabs are not yet supported by the bootstrap-rst plugin (which does support buttons, but I like my button and I did it already). I then decided to poke around the bootstrap tab code, which then proceeded to say that I needed to use javascript. As far as I could tell, not using javascript meant they wouldn’t actually work: why would you have tabs that don’t work? I stopped because I was no longer able to figure out what they were talking about anymore.

#### No, I need javascript!

Seems like tabs (and pills) require javascript, and I do not really understand javascript well enough to be sure why it might not be working when it fails to work.

#### Meanwhile…

In the meantime, I’ll just make separate pages for my “Completed Projects”, “Design Artifacts”, and “Current Projects” portfolio views. I think I may be able to nest them under ‘portfolio’ in my navigation menu, but I’ll have to poke at Pelican and how it works again to be sure about that one.

After all this, though, I think I know how to make a horizontal TOC for my regional hubs portfolio. Which I need to do.

the api is bad but twit is good!

all you need is an index.js file with this stuff in it:

var Twit = require('twit')

var T = new Twit({
  consumer_key: 'your_key',
  consumer_secret: 'your_secret',
  access_token: 'your_token',
  access_token_secret: 'your_other_secret',
})

some usage:

• get tweet ids of the most recent 20 tweets from a given user name

T.get('statuses/user_timeline', { screen_name: 'realdonaldtrump', count: 20 }, function(err, data, response) {
  var tweetIds = []
  for (var i = 0; i < data.length; i++) {
    // use id_str, not id: tweet ids are bigger than javascript numbers can hold
    tweetIds.push(data[i].id_str)
  }
  console.log('tweetIds: ', tweetIds)
})

• get the most recent statuses that include the query

T.get('search/tweets', { q: 'maga', result_type: 'recent', count: 20 }, function(err, data, response) {
  var tweets = []
  for (var i = 0; i < data.statuses.length; i++) {
    tweets.push(data.statuses[i].text)
  }
  console.log(tweets)
})

• literally the most annoying way to get twitter bios of richard spencer’s followers

// 1. run the program
// 2. copy & paste the cursor that gets logged
// 3. replace the cursor field in the get request
// repeat
T.get('followers/list', { screen_name: 'richardbspencer', count: 200, cursor: 1562324159627806700 }, function(err, data, response) {
  var bios = []
  for (var i = 0; i < data.users.length; i++) {
    bios.push(data.users[i].description)
  }
  console.log('bios: ', bios)
  // log next_cursor_str, not next_cursor: the numeric cursor loses precision in js
  console.log('cursor: ', data.next_cursor_str)
})
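
the copy/paste loop above can be automated by recursing on the cursor until twitter hands back 0. a sketch against twit’s callback signature (untested against rate limits: followers/list only allows 15 calls per 15-minute window, so a real run would need to pause between pages). it uses next_cursor_str because the raw numeric cursor overflows javascript numbers — which is why the pasted cursor above ends in 00.

```javascript
// Page through followers/list automatically, collecting bios as we go.
// T is a twit client; done(err, bios) is called when the last page
// (next_cursor_str === '0') has been consumed.
function collectBios(T, screenName, cursor, bios, done) {
  T.get('followers/list',
    { screen_name: screenName, count: 200, cursor: cursor },
    function (err, data, response) {
      if (err) return done(err, bios)
      for (var i = 0; i < data.users.length; i++) {
        bios.push(data.users[i].description)
      }
      if (data.next_cursor_str === '0') return done(null, bios)
      collectBios(T, screenName, data.next_cursor_str, bios, done)
    })
}

// collectBios(T, 'richardbspencer', -1, [], function (err, bios) {
//   console.log('bios: ', bios)
// })
```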

here’s a sampling of richard spencer’s followers’ twitter bios, after lots of cleanup:

PC Gamer. Social Democrat. Egalitarian. Admin at @TheDPWiki. I revel in your butthurt.

Biblical Christian and anti-corporate Libertarian. Jesus is my Good Shepard not the State or Allstate and we are definitely in the last days.

FutureDanger is an American news link Heatmap Dashboard. Achieve situational awareness of multiple threat indicators in one view. Be prepared. ΜΟΛΩΝ ΛΑΒΕ

The Donald and #MAGA to save The Merica. Kek Meme generation and Ody cat integration.

Official Twitter of the Aggie Helicopter Enthusiasts.\nLeading the struggle against Marxism at Texas A&M University

Vagabond

#TRUMPNATION

“But isn\t that the way it is? Every goddamn time. Your prayers are always answered in the order they\re received.”

^^ #FIFOprayers

tagged with this script:

import sys
import pattern
from pattern.en import parsetree
from pattern.search import search

for data in sys.stdin:
    parsed_data = parsetree(data)
    print parsed_data

…[Sentence(‘Altruism/NN/B-NP/O :/:/O/O Selfless/NNP/B-NP/O Self/NNP/I-NP/O Destructive/NNP/I-NP/O death/NN/I-NP/O Cult/NN/I-NP/O religion/NN/I-NP/O Philosophy/NN/I-NP/O and/CC/I-NP/O Language/NN/I-NP/O of/IN/B-PP/B-PNP Irrationalism/NN/B-NP/I-PNP Socialism/NN/I-NP/I-PNP Communism/NNP/I-NP/I-PNP Fascism/NNP/I-NP/I-PNP all/DT/I-NP/I-PNP Wars/NNP/I-NP/I-PNP ././O/O’), Sentence(‘AltTrue-ism/NN/B-NP/O =/SYM/O/O Lies/NN/B-NP/O’)]…

but my old script from last week worked fine too, as long as i erase emojis and other weird stuff. so i ran the bios through last week’s grammar.py script.

cat bios-phraseGrep-clean.txt | python grammar.py > grammar4.txt

here’s my fav:

TASK FORCE ON u’everything software AND PUBLIC SAFETY By u’the darkness vested in me as President by the Constitution and the laws of the United States of America, and in order to reduce crime and restore public safety to communities across u’the solution, it is hereby ordered as follows: Section 1. Policy. It shall be u’the name of the executive branch to reduce crime in America. Many communities across the Nation are suffering from high rates of violent crime. u’the time on law and order and u’the truth and security of the American people requires u’the gallows to enforcing u’the state and developing policies that comprehensively address illegal immigration, u’no gf, and violent crime. The Department of Justice shall take u’the lamentation on Federal actions to support dog lover~runner efforts nationwide and to collaborate with State, tribal, and local jurisdictions to restore public safety to all of our communities.

okay, but this was all sooooo messy. things i need to improve next time:

• workflow for bio responses through phraseGrep script! this time, i was copy/pasting the console.log of the bios, 100 at a time where each one looked ‘like this’, into a txt file—but when i looked at the file in the python interpreter, each line wasn’t registering as a line in an array. there was just a bunch of garbage punctuation to get rid of. bummer.
• also, to keep this all python, i should use tweepy (python) instead of twit (js). i wonder if i can get the same info with that?
• sentence tagging! sometimes tagging words is better, sometimes keeping the sentences intact is better.
• also, how to get most frequently occurring POS tags?

During a recent recruiter technical interview, the interviewer commented that my website wasn’t very welcoming or likely to encourage people to look further.

Now, one of the things she said was that I had no photo of myself. This is actually something I’ve been reluctant to put up, in part because I want it to not matter. But, perhaps, it does matter.

Upon reflection, there really isn’t much there about me as a person, rather than as a QE person turned UX designer. Even that part isn’t especially well described, so someone looking at my site has no real sense of how my background informs my UX skills and processes.

Let me show you what I mean:

This is everything currently on my home page. It’s functional enough, but it can be difficult to decide what to click on and why. And there’s really only professional stuff: contact info, resume, portfolio, and a little bit on accessibility.

### Enthusiasm?

My former mentor through Outreachy commented that this page, my resume, and my portfolio don’t really communicate my enthusiasm about UX. This is true, in part because I was trying to be professional, which rarely goes along with enthusiasm. She pointed out that my enthusiasm is clear on this blog, and that I need some way to encourage people to go here and read about my internship.

She also pointed out that I need an obvious link from here to my professional side: my portfolio, and LinkedIn. (Mind you, the description of my internship on LinkedIn needs some work.)

So, task one: decide how best to get people on my website interested in looking at this blog. Perhaps also decide which entry point to offer: maybe there’s an especially good post I’ve got up that I want people to look at?

Right, so getting people aware of my enthusiasm is great. But what’s going to get them connected enough with me to bother exploring further?

Well, I need a photo of myself. Selecting a photo might be simple (use what I’ve got elseweb), or maybe I want to have it _say_ something about me. What it should say, I’m less sure about.

Maybe I want banner images on the approachable pages: my main page, and something about me and my background.

Task two: I need to talk about myself. I likely won’t talk about my relationship structure or anything, since that’s more personal than I want to get. But I could talk about my garden and pond; the cats, plants, and people I live with; my interest in wandering around wooded areas and kayaking. I could mention that I play World of Warcraft. I could probably talk about my general liking of people. And living things, really — if you’re alive, I’m probably interested in learning more about you.

Also, pictures of things. People like pictures of things!

#### From my professional background to UX

Task three: I need to talk about my background. Starting with a computer science undergrad degree, meeting fellow computer scientists and Linux enthusiasts. Those enthusiasts helped get me into an internship in QE at a Linux company, and from there I continued on to do QE at two other Linux companies. I’ve tested a wide range of things, from drivers to databases to desktop software to wireless networking. I tend to do a lot of writing: it makes it easier for me to have references for things and it makes it easier for other people to be able to read what I’ve written. I eventually left QE in part because of my frustration with usability bugs not being addressed, and the amount of struggle involved in getting people to listen. I was doing QE on Linux software for nearly 10 years, and using it for a few years before that: I’ve got pretty good open source technical skills.

I was interested in psychology research, and after some initial online classwork, I realized that I was never going to get into graduate school without some research experience. Volunteer work at a lab near my home meant that I got into a Master’s program at that school the following year. That Master’s degree involved a great deal of running research studies, collecting data from those, and analyzing that data. As a result of that time, I got a lot more comfortable interacting with participants, and figuring out what to do with the data — often involving other people’s ideas and perspectives on the best approach. After the master’s degree, with a great deal more work and time, I got into a psychology/human-robot interaction Ph.D. program. This was fascinating stuff, and meant that I had even more practice creating a research plan; running a research study; and collecting, processing, and analyzing the data. Unfortunately, I did not complete that program, and had to reorient my career plans.

During that reorientation period, I considered the many people I knew in UX. I recalled how frustrated I’d been when usability issues were closed because “they weren’t bugs”. I thought about how much I enjoy doing research, learning about people, and what they are doing and why. I had a lot of discussions with current UX professionals, and they generally agreed with my feeling that UX was the right way forward for me. At that point, the task was to figure out how to make the transition.

A year and a half later, I’ve attended multiple UX meetups, gone to the UXPA conference, and read a bunch of books, websites, and blog posts. I’ve done a couple of UX projects to help focus my learning, a hackathon, and the recent internship. This is amazing stuff, and it’s clearly taking off.

#### How do I focus all this stuff?

So great, that’s a lot of stuff up there, even after trying to trim it down. How do I focus my background in a way that a) shows my open source, Linux, technical background, b) explains how I got to UX, and c) is interesting enough to keep people reading?

Do I mention that the way I got online in the first place was by constantly asking questions of the local sysadmin at the college I was in at the time? I’d never seen UNIX before, but I wanted to understand it. And he was willing to answer my questions. These were UNIX boxes in the campus library, where I spent much of my time.

This was handy later on, because it meant that Linux wasn’t that foreign an OS to me when people mentioned it and the possibility of using it. Sure, the install process at the time was really confusing, and it was wonderful that friends of mine helped me through it the first time or two.

Do I even mention all the research that I did and am doing on UX, or is that something that belongs in the portfolio section? How does one “show your work” without showing too much work?

What about the fact that my website is itself technical? I’m using the Pelican static website generator, with a theme that I’ve been modifying. This has meant trying to figure out how to do things like make images have captions (the better images and figures plugin plus some modification so that I can set the width in the restructured text syntax rather than having it default to the actual image size), figuring out how I might have my images be expanded inline to full-size on click (still working on that one), and figuring out how to make images be links (“target” apparently). I’m also going to need to figure out tabs (or pills) and buttons, if I want those.

I don’t write code, but that doesn’t mean I can’t modify existing code to do what I want.

### What else?

So, I’ve mostly talked about content so far. That isn’t all that is involved with being a welcoming website, however.

I’m considering adding buttons to make the bits that I want people to click on more obvious. Everything is currently links, except the overly cryptic social media section on the side, which uses icons instead.

#### What about the portfolio itself?

I’ve not yet shown you the portfolio page, so let’s get to that:

At least the portfolio has pictures, right?

My mentor suggested adding an alternative view or section at the top in which there are thumbnails of particular design artifacts. She was worried that people might not want to go through an entire project, and might therefore miss the variety of different design methods I have used.

Great. I see how this might be useful. How do I make it not overwhelming, though? There’s already a lot of stuff on this page…

Maybe tabs? Or pills (which term I used when looking up how to do tabs in CSS)? Have the first, default, section be the portfolio organized by project. Have another one which is organized by artifact. Maybe a third for current projects like this redesign?

What about reference statements? My mentor said she’d be happy to provide one, and I can definitely see how those’d be useful. But where do I put it so that it’s visible without being too crowded?

#### Internship portfolio

Another reviewer of my portfolio suggested the possibility of a table of contents for my internship portfolio:

This seems like a perfectly reasonable suggestion, given there are 11 sections on the page, all of which have images and a brief description.

I can’t figure out how best to do it without taking up a ridiculous amount of space, or making it hard to interpret. ToCs tend to be a list of items from top to bottom, which seems ill-advised here.

I was considering trying to create a box in which I have an invisible table that holds the links to the rest of the portfolio. I can’t decide if that’s a terrible idea, though. I probably need to see what other people do for table of contents-like things!

### Presentations?

I’m also working on presentations for the work I’ve done, and for About Me. I’m not sure if that’s the right kind of thing to include on my website. Will have to think on that one.

### Summary

I’m currently working on my website redesign and presentations, getting involved with Patternfly Design, and was impanelled on the grand jury for my county for the next three months.

Oh, and job hunting. Which is a bit complicated by the grand jury part!

I have written a dup3() test case, which I am going to run against the close_on_exec patch.
I have also written a program (child_process.c) that runs on exec and tries to open the inherited file descriptor using fdopen().

The dup3() test case is given below:

/*
 * This is a dup3() functionality test case.
 * dup3() duplicates a file descriptor to a given number, with flags.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

#define WRONG_FLAG -1

int dup3_basic_tests(void)
{
    int oldfd, newfd = 10;  /* newfd must be a valid number before dup3() uses it */
    int ret;

    oldfd = open("/tmp/file1", O_CREAT | O_RDWR, 0644);
    if (oldfd < 0) {
        printf("open() error\n");
        return -1;
    }

    /* test 1: a plain open() must not set FD_CLOEXEC */
    ret = fcntl(oldfd, F_GETFD);
    if (ret != FD_CLOEXEC)
        printf("test 1 : pass\n");
    else
        printf("test 1 : fail\n");

    /* test 2: dup3() with O_CLOEXEC must set FD_CLOEXEC on the new fd */
    ret = dup3(oldfd, newfd, O_CLOEXEC);
    if (ret == -1) {
        printf("dup3() error\n");
        return -1;
    }
    ret = fcntl(newfd, F_GETFD);  /* read the file descriptor flags */
    if (ret == FD_CLOEXEC)
        printf("test 2 : pass\n");
    else
        printf("test 2 : fail\n");
    close(newfd);

    /* test 3: dup3() with no flags must not set FD_CLOEXEC */
    ret = dup3(oldfd, newfd, 0);
    if (ret == -1) {
        printf("dup3() error\n");
        return -1;
    }
    ret = fcntl(newfd, F_GETFD);
    if (ret != FD_CLOEXEC)
        printf("test 3 : pass\n");
    else
        printf("test 3 : fail\n");
    close(newfd);

    /* test 4: an invalid flag must make dup3() fail */
    ret = dup3(oldfd, newfd, WRONG_FLAG);
    if (ret == -1)
        printf("test 4 : pass\n");
    else
        printf("test 4 : fail\n");

    close(oldfd);
    unlink("/tmp/file1");
    return 0;
}

int dup3_loop(void)
{
    int i, ret;
    int oldfd;

    oldfd = open("/tmp/file2", O_CREAT | O_RDWR, 0644);
    if (oldfd < 0) {
        printf("open() error\n");
        return -1;
    }

    for (i = 3; i < 100; i++) {
        if (i == oldfd)  /* dup3() fails with EINVAL when oldfd == newfd */
            continue;
        ret = dup3(oldfd, i, O_CLOEXEC);
        if (ret < 0) {
            printf("dup3_loop : dup3() error\n");
            return -1;
        }
    }

    for (i = 3; i < 100; i++) {
        if (i == oldfd)
            continue;
        ret = fcntl(i, F_GETFD);
        if (ret < 0) {
            printf("dup3_loop : fcntl() error\n");
            return -1;
        }
        if (ret == FD_CLOEXEC)
            printf("dup3_loop() : pass.%d\n", i);
        else
            printf("dup3_loop() : fail.%d\n", i);
        close(i);
    }

    close(oldfd);
    unlink("/tmp/file2");
    return 0;
}

void fork_exec(void)
{
    int fd, new_fd = 4;
    int pid, ret;
    char fd_buf[10];
    char *argv[3] = { "child_process", fd_buf, NULL };

    fd = open("/tmp/file3", O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        printf("fork_exec : open failed\n");
        exit(-1);
    }
    ret = dup3(fd, new_fd, O_CLOEXEC);
    printf("parent process new_fd %d\n", new_fd);
    if (ret < 0) {
        printf("dup3 : dup3 failed\n");
        exit(-1);
    }
    pid = fork();
    if (pid == 0) {
        sprintf(fd_buf, "%d", new_fd);
        ret = execvp("./child_process", argv);
        if (ret < 0) {
            printf("execv() failed...\n");
            exit(-1);
        }
    }
    wait(NULL);  /* let the child finish before the parent exits */
}

int main(void)
{
    int ret;

    ret = dup3_basic_tests();
    if (ret == -1)
        return -1;
    ret = dup3_loop();
    if (ret == -1)
        return -1;
    fork_exec();
    return 0;
}

The program for child_process.c, which runs on exec:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int fd;
    FILE *file;

    if (argc < 2) {
        printf("Usage : %s fd\n", argv[0]);
        return -1;
    }

    fd = atoi(argv[1]);
    printf("child process fd %d\n", fd);

    /* the fd was marked close-on-exec, so fdopen() should fail here */
    file = fdopen(fd, "r+");
    if (file == NULL)
        printf("fork-exec : test pass\n");
    else
        printf("fork-exec : test fail\n");
    return 0;
}

At Fedora, we are committed to promoting diversity and inclusion in the community. With this in mind, the Fedora Diversity Team, together with the Diversity Advisor, works on the planning and implementation of diversity and inclusion (D&I) efforts, such as enforcing the code of conduct, highlighting the diverse community in Fedora, offering workshops, and fostering communication between Fedora sub-projects. I have been lucky to be a part of this team for some time now. Big thanks to Amita Sharma, who invited me to join the Diversity team.

At FLOCK 2016, we had a Fedora Diversity panel discussion where we talked through key issues affecting diversity and inclusion in Fedora with the community and heard their concerns. Post FLOCK, we have been working on a strategy to address some of the critical D&I issues in Fedora. From Jan 27-29, we conducted our first FAD (an in-person work meet) in Brno, CZ, where the Fedora Diversity Team got together to fast-track some of the critical issues relevant to our goals. We also invited the Fedora Community Action and Impact Coordinator, Brian Exelbierd, to join us for this FAD.

You can take a look at the logic model detailing our goals for the FAD here. While the agenda was a bit aggressive, having a detailed plan helped us delegate our time effectively and focus on critical issues.

## Fedora Community Survey

The FAD had a massive impact on the progress of the survey. We looked at past community surveys in open source communities, identified and selected the questions we wanted to ask, designed the survey, identified the platforms, and sketched a timeline for its implementation and future tasks.

The biggest decision we made was probably to move away from a survey aimed solely at diversity and inclusion and towards one about the Fedora community and its composition. Since such a survey has never been done before, we do not have much data about the Fedora community, and this survey holds the key to gathering it. This also moves us away from any implicit exclusion that might have resulted from our preconceived notions of diversity and inclusion.

I feel that Brian did a great job of summarizing the key questions we are looking to answer:

• Are there pockets of people we can energize to help us further our progress on objectives and missions in ways we don’t realize?
• How do we compare to other communities?
• Are there changes we should consider to better serve our community, for example, providing information for localization efforts

Each question and its options were scrutinized intensively: what new information did the question offer, how relevant was it to our current goals, was its language positive and inclusive of our community, and would the options capture as much information as possible? We also designed the options so that we were not too intrusive while gathering information, but at the same time the survey wouldn’t end up mostly blank; a ‘Prefer Not to Say’ option was therefore included instead of making questions optional. Time is of utmost value, so we conducted a final vote to ensure that the survey length was reasonable for participants. A final list of questions was drafted and is currently awaiting legal review before being pushed to production.

## Fedora Appreciation Week

Contributors are extremely valuable to open source projects, and we at Fedora want to thank the community for their hard work and valuable contributions, without which we wouldn’t be here today. Along these lines, we decided to have a “Fedora Appreciation Week” – a week-long celebration of our contributors and their value to the project. We had some great discussions and sketched out a rough plan for how it would be implemented using Fedora Badges and thank-you messages, revolving around the core values of Fedora – Friends. Freedom. Features. First. Stay tuned for more information.

## Joining Forces with other Open Source communities

Creating diverse and inclusive communities is not a one-person task. It requires the involvement of the entire community. We feel, however, that it shouldn’t be limited to single communities either. Open source communities can learn from each other and build on each other’s research so that resources are utilized to their full potential. Our team is currently researching the findings and strategies of other open source communities; if you know of such a community that you would like us to look into and learn from, don’t hesitate to talk to us via our mailing list diversity@lists.fedoraproject.org.

## Get involved with Fedora Diversity

Are you interested in the work we do? Do you want to get involved with Fedora Diversity? Do you have suggestions for the Diversity team about our methods or things we should look into? Do you want to know what the Fedora Diversity Team is currently working on?

Feel free to drop by our biweekly meetings on Freenode #fedora-meeting-1 on Wednesdays at 12:00 UTC.

## Thanks

A huge thanks to Fedora Project Leader Matthew Miller and the Fedora Council for making this possible. To Marina Z. and Tatica for joining us remotely and providing their valuable inputs even though the timings were sometimes less than suitable. To Brian, for keeping us on track, questioning and re-questioning every small and large decision. To Justin, for the awesome logic model, which gave us a sense of purpose and reminded us of our goals. To Amita, for all the pre-planning before the FAD and the follow-up after it – which was definitely a lot – without you we wouldn’t have had a FAD. To Jona, for her inputs and learning on the Albanian community and the non-technical aspects of Fedora. To Rhea, for the awesome restaurant suggestions along with taking care of all logistics – without you we would have been dead or sick in Brno. To all of you, for making this FAD a huge success.

Thanks to Outreachy, I got an opportunity to attend the FossAsia Summit that took place from March 17th – 19th at Science Center, Singapore.

The event had some of the most interesting talks under more than a dozen tracks, ranging from Python to Kernel & Platform, a Debian Mini-DebConf to Security and Privacy, Science, AI & Machine Learning, and many more. And so the first difficulty I had was deciding which talks to attend. The FossAsia Android app was somewhat useful in narrowing down the talks I was most interested in, but with so many overlapping sessions, I still missed many that I would otherwise have loved to attend. For those who want to watch the recorded videos, here’s the link to the FossAsia YouTube channel.

So here I am, summarizing the things I learned from the talks I attended:

1. Deploying Open Event WebApp to Google Cloud
Open Event is an open source project that makes it really simple and easy to create and manage an event website by automating the generation of required files for creating the website. The FossAsia website is itself built using the Open Event system. Open Event WebApp Generator takes a JSON file containing event data as input. It merges the JSON data file with the Handlebar templates to create the HTML files. The generated website can then be previewed locally or can be downloaded as a zip file. Following the steps here, the Open Event WebApp can then be deployed on Google Cloud.

2. Among my favorite sessions was Frank Karlitschek’s keynote on ‘Protecting privacy with free software’. Frank talked about how crucial it is to have distributed, decentralized, privacy-enabled, federated approaches to storing and sharing content across the internet, and showcased Nextcloud. Nextcloud provides file storage services on a user’s own personal cloud, giving users full control over their data by letting them store and maintain it on their own server.

Federated sharing approaches enable each user to have their own Federated Cloud ID for their server, just like having an email address, which they can use to share their content with their friends’ servers, each of which will also have its own Cloud ID.

3. Michael Meskes in his talk on Debian – The Best Linux Distribution explained why one should contribute to Debian and how Debian is Special. He talked about the Debian Community, the Debian Social Contract, and the Debian Free Software Guidelines and how valuable they are for the community.

4. There was also the FreedomBox stall which had lots of interesting discussions along with the demonstration by Nikhil Rayaprolu.

5. Vaishali Thakkar, in her talk on Coccinelle: Finding bugs in the Linux Kernel, explained the different issues faced during code maintenance and why there’s a need for a tool that is fast, systematic, reliable, and follows the standards. She further explained the goal of Coccinelle, which is to automatically find and fix bugs based on a pattern, and demonstrated a few examples that were a bit difficult for me to follow. After the session, I asked her for some newbie-friendly resources, which she kindly pointed me to.

6. Another of my favorite sessions was the FreeBSD is not a Linux Distro session by Philip Paeps of the FreeBSD Foundation. The session was just perfect for a newbie to gain an understanding of the FreeBSD operating system, the community, and how to begin contributing to it. Philip talked about everything a newbie would want to know, from FreeBSD’s history to its different flavors, the license, the filesystems, security features, compilers, tools like DTrace, networking, virtualisation et al.

7.  Praveen Patil also gave an interesting talk on Learning adventures with Python in Science where he demonstrated different ways in which Python can be used to effectively teach Maths and Science concepts to students. He demonstrated damping of sine waves, energy transfer through a coupled spring in oscillating pendulums, Faraday’s laws of electromagnetic induction and various other concepts in Physics through different simple setups and visualizations coded in python.
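The damped sine wave demo he showed can be sketched in a few lines. The function name and constants below are my own invention, not code from the talk, but the formula y(t) = A·e^(−bt)·sin(ωt) is the standard damped-oscillation model being demonstrated:

```python
import math

# Sketch of a damped sine wave: y(t) = A * e^(-b*t) * sin(w*t).
# Function name and default constants are invented for illustration.
def damped_sine(t, amplitude=1.0, damping=0.5, omega=2 * math.pi):
    return amplitude * math.exp(-damping * t) * math.sin(omega * t)

# Sample near successive crests (t = 0.25, 1.25, 2.25 with period 1):
# each peak is smaller than the last, showing the decaying envelope.
peaks = [abs(damped_sine(t + 0.25)) for t in (0.0, 1.0, 2.0)]
print(peaks)
```

Plotting `damped_sine` over a fine time grid (e.g. with matplotlib) gives the shrinking-oscillation picture shown to students.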

8. Also learned about KVM, Qemu and libvirt in Bandan Das’s session on The Qemu/KVM ecosystem: Navigating the code! This was the first time I was learning about KVM internals and so found it a bit advanced, but I am curious to learn more about it after having attended the session.

9. Got to learn about OpenStack’s CI System in Matthew Treinish’s session on Better Testing Through Statistics in which he explained how when we push a change upstream in OpenStack, it spins up a lot of test jobs and how the test data and log information can be better utilized by having statistics and data mining tools that help in tracking down the reason behind the test failures, and also in detecting patterns and trends behind these failures on a larger scale. He talked about tools like Graphite, Grafana, ELK (Elasticsearch, Logstash, Kibana), StackViz and OpenStack Health that can be used on top of the CI system.

Apart from the talks, I also got to meet Matthew Treinish, Samuel de Medeiros Queiroz, Masayuki Igawa and Dong Ma. I am grateful to the Outreachy team for giving me this opportunity where I could learn so much.

Lots of new stuff in the app recently!

##### version 2.0

We finally released v2.0 of the app, after one and a half years on v1.x! It felt appropriate, given that the app has changed considerably from the early 1.x releases and has diverged quite a bit from the original legacy app.

##### Beta testing

With over 2000 active installs, we decided that we should migrate to using a beta version for testing new releases, rather than pushing directly to production and hoping for the best. Opt in at https://play.google.com/apps/testing/fr.free.nrw.commons

##### Notification if picture is a duplicate

As requested on the Commons village pump, we implemented a check to see if the exact same image already exists in the Commons database, in order to prevent unintentional duplicate uploads. This was done by checking for matching SHA1 hashes using this API.
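A rough sketch of how such a check can work, using the public MediaWiki `allimages` API with its `aisha1` parameter (the helper names below are invented, and the app's actual implementation may differ):

```python
import hashlib

# Commons can be queried for files by their SHA1 hash via the MediaWiki API
# (list=allimages with the aisha1 parameter). Helper names are invented;
# the real app's code may be structured differently.
COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def file_sha1(data):
    # Commons indexes every file by its SHA1 hash
    return hashlib.sha1(data).hexdigest()

def build_duplicate_query(sha1):
    # a non-empty allimages result means the exact file already exists
    return {
        "action": "query",
        "format": "json",
        "list": "allimages",
        "aisha1": sha1,
    }

params = build_duplicate_query(file_sha1(b"hello"))
print(params["aisha1"])
```

In the app this query would be sent before upload; a non-empty result list triggers the duplicate warning.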

##### License selection from the upload screen

Users can now select their licenses directly from the upload screen instead of just via Settings! This has been implemented via a drop-down menu underneath the title and description fields. CC-BY-4.0 and CC-BY-SA-4.0 have also been added to the available licenses to choose from.

##### New and improved UI

The UI has received an update, and is now in line with Material design guidelines. Also, a light theme has been added, which is more suited for daytime use. This can be accessed via Settings as shown below.

##### Updated Google Play store listing

Our Google Play store listing has received an overhaul, with a new description and new screenshots that reflect the current state of the app.

##### New volunteers

We would like to welcome first-time contributors Aditi Bhatnagar, Veyndan Stuart, Vivek Maskara, and Neslihan Turan on board! Many of the recent improvements in the app have been thanks to them.

##### Prague pre-hackathon and Vienna hackathon

We will be sending a team to the Wikimedia Hackathon Vienna to work on improving the Nearby function of the app, and development will start during the Prague Pre-Hackathon that Wikimedia Czech Republic has kindly organized for us. More details in our discussion thread.

i did a module! with functions! phraseGrep.py tags pattern phrases and puts each phrase into its pattern’s array. then, grammar.py imports those arrays and uses them in a tracery grammar. the examples below were generated with chunks from LOTR and gladiator respectively.
 cat lotrChunkSelection.txt | python grammar.py

# this program returns arrays of strings that match the given patterns, in this case:
# 'DT NN'
# 'NN NN'
# 'NN NN NNP'
# 'JJ CC JJ'
# 'shall VB'
# 'shall VB VBN'

import sys
import pattern
from pattern.en import parsetree
from pattern.search import search

determiner_noun = []
noun_phrases = []
noun_ph_plural = []
shall_verb = []
shall_verb_past = []

# pattern: a string of one or more part-of-speech tags from the penn treebank set
# (ref: http://www.clips.ua.ac.be/pages/mbsp-tags)
# array: name of the empty array that holds the results of the pattern search
def search_append(pattern, array_name):
    for item in search(pattern, parsed_data):
        array_name.append(item.string)

def print_formatted_array(pattern, array_name):
    print pattern
    print array_name
    print '\n-------------'

for data in sys.stdin:
    parsed_data = parsetree(data)
    search_append('DT NN', determiner_noun)
    search_append('NN NN', noun_phrases)
    search_append('NN NN NNP', noun_ph_plural)
    search_append('shall VB', shall_verb)
    search_append('shall VB VBN', shall_verb_past)

# print_formatted_array('DT NN', determiner_noun)

and

import sys
import tracery
from phraseGrep import determiner_noun
from phraseGrep import noun_phrases
from tracery.modifiers import base_english

rules = {
    'origin': 'TASK FORCE ON #nn_nn# AND PUBLIC SAFETY By #dt_nn# vested in me as President by the Constitution and the laws of the United States of America, and in order to reduce crime and restore public safety to communities across #dt_nn#, it is hereby ordered as follows: Section 1. Policy. It shall be #dt_nn# of the executive branch to reduce crime in America. Many communities across the Nation are suffering from high rates of violent crime. #dt_nn# on law and order and #dt_nn# and security of the American people requires #dt_nn# to enforcing #dt_nn# and developing policies that comprehensively address illegal immigration, #nn_nn#, and violent crime. The Department of Justice shall take #dt_nn# on Federal actions to support #nn_nn# efforts nationwide and to collaborate with State, tribal, and local jurisdictions to restore public safety to all of our communities.',
    'dt_nn': determiner_noun,
    'nn_nn': noun_phrases
}

grammar = tracery.Grammar(rules)
print grammar.flatten("#origin#")

results:

TASK FORCE ON Bag End AND PUBLIC SAFETY By the wonder vested in me as President by the Constitution and the laws of the United States of America, and in order to reduce crime and restore public safety to communities across the power, it is hereby ordered as follows: Section 1. Policy. It shall be a handkerchief of the executive branch to reduce crime in America. Many communities across the Nation are suffering from high rates of violent crime. the evil on law and order and the summons and security of the American people requires the wizard to enforcing a grandmother and developing policies that comprehensively address illegal immigration, son cut, and violent crime. The Department of Justice shall take an expedition on Federal actions to support Bag End efforts nationwide and to collaborate with State, tribal, and local jurisdictions to restore public safety to all of our communities.

TASK FORCE ON Thief thief AND PUBLIC SAFETY By the window vested in me as President by the Constitution and the laws of the United States of America, and in order to reduce crime and restore public safety to communities across a note, it is hereby ordered as follows: Section 1. Policy. It shall be the top of the executive branch to reduce crime in America. Many communities across the Nation are suffering from high rates of violent crime. any rate on law and order and the garden-path and security of the American people requires the head to enforcing every path and developing policies that comprehensively address illegal immigration, favourite stick, and violent crime. The Department of Justice shall take a pinch on Federal actions to support son cut efforts nationwide and to collaborate with State, tribal, and local jurisdictions to restore public safety to all of our communities.

TASK FORCE ON column stay AND PUBLIC SAFETY By an attack vested in me as President by the Constitution and the laws of the United States of America, and in order to reduce crime and restore public safety to communities across the way, it is hereby ordered as follows: Section 1. Policy. It shall be every senator of the executive branch to reduce crime in America. Many communities across the Nation are suffering from high rates of violent crime. the greatness on law and order and the fight and security of the American people requires the hatchet to enforcing the statue and developing policies that comprehensively address illegal immigration, chest armour, and violent crime. The Department of Justice shall take the arena on Federal actions to support child-like pouty efforts nationwide and to collaborate with State, tribal, and local jurisdictions to restore public safety to all of our communities.

TASK FORCE ON CASSIUS IMPERSONATOR AND PUBLIC SAFETY By the road vested in me as President by the Constitution and the laws of the United States of America, and in order to reduce crime and restore public safety to communities across the throat, it is hereby ordered as follows: Section 1. Policy. It shall be the crowd of the executive branch to reduce crime in America. Many communities across the Nation are suffering from high rates of violent crime. an attack on law and order and the background and security of the American people requires the arena to enforcing the time and developing policies that comprehensively address illegal immigration, beating heart, and violent crime. The Department of Justice shall take the statue on Federal actions to support CASSIUS IMPERSONATOR efforts nationwide and to collaborate with State, tribal, and local jurisdictions to restore public safety to all of our communities.

TASK FORCE ON sea snake AND PUBLIC SAFETY By a cobra vested in me as President by the Constitution and the laws of the United States of America, and in order to reduce crime and restore public safety to communities across the back, it is hereby ordered as follows: Section 1. Policy. It shall be a cobra of the executive branch to reduce crime in America. Many communities across the Nation are suffering from high rates of violent crime. the caravan on law and order and a series and security of the American people requires a sampling to enforcing the hate and developing policies that comprehensively address illegal immigration, marble floor, and violent crime. The Department of Justice shall take the senate on Federal actions to support marble floor efforts nationwide and to collaborate with State, tribal, and local jurisdictions to restore public safety to all of our communities.

does it work? not really, not yet. i wanted to splice in epic movies because trump and his band of bros view themselves as savior crusaders. but maybe i could try some other kinds of texts? and find more patterns to replace? to make the output more ridiculous? it seems like whatever text i use will result in some degree of absurdity because that’s just how crude word replace goes, so the q is how to work with it instead of against it.

It’s not done, but it’s very close. Certainly close enough for feedback.

If anyone is interested or willing, please check out the Regional Hubs piece of my portfolio at https://suzannehillman.com/pages/fedora-regional-hubs.html

Thank you!

The FOSSASIA OpenTechSummit is Asia’s leading open technology conference for developers, technologists, scientists, entrepreneurs and IT professionals to get together to showcase the latest technologies, communicate, exchange ideas, learn from each other, and collaborate. This year the event took place from March 17th to 19th at the Science Centre Singapore. During the three days, the events not only introduced me to lots of new ideas but also gave me a fresh outlook on existing ones. Without further ado, here are the top ten:

1. Expanded knowledge base on Android Development

I love developing android apps, so I pinned down the android tracks which I didn’t want to miss. There were two sessions by Google developers which I found particularly useful, namely ‘Using Android apps on Chromebooks for greater productivity and flexibility’ and ‘MVVM architecture with Data Bindings’. Not to forget, it was very generous of Google to give a US$100 credit (for using Google Cloud Platform) to all the attendees of the Google Open Source Track at FOSSASIA 2017.

2. Enhanced understanding of OpenStack

The talk on ‘Better Testing through Statistics’ by Matthew cleared a few of my doubts and gave me more clarity on some basic concepts involved in testing which affect OpenStack’s health. I learned how to use ELK (Elasticsearch, Logstash and Kibana) effectively and came across StackViz, a visualization tool for individual CI build results. Matthew is a member of the OpenStack TC (technical committee) and was the OpenStack QA program PTL for four development cycles, from Juno through Mitaka.

3. Met my Outreachy mentor

This was the second time I met my mentor, Samuel de Medeiros Queiroz, a core reviewer of OpenStack’s Keystone project. We are co-mentoring in the upcoming Outreachy May-Aug ’17 round in OpenStack, so we took advantage of this opportunity to discuss and chalk out a further action plan.

4. Joined the Wikimedia community

Srishti Sethi gave an amazing talk on ‘The Wikimedia free software community: Learn ways to get involved’ along with a hands-on workshop to get started the following day. Deep emphasis was laid on the different areas through which one can contribute to Wikimedia projects, such as design, documentation, research, translation, etc. Srishti also talked about GSoC, Outreachy and Google Code-in to spread awareness of the programs.

5. Gained exposure to Artificial Intelligence

The ‘Open AI assistants’ and ‘Artificial Intelligence – Panel’ tracks were very enlightening and helped me get a basic overview. The track ‘The Brainduino Project, Brain-Computer Interfaces and AI in the Future’ was about how the Brainduino project started, where it is now and where it is going. It also explained how EEG brain-computer interfaces work and why this topic is relevant, together with the current advances in artificial intelligence and hardware.

6. Deployed the Open Event Web App to Google Cloud

I attended a two-hour hands-on workshop by Arnav Gupta, who is my college senior and a former GSoC intern. The Open Event Web App project has two components: a) an event website generator and b) the generated website output. The generator application can generate event websites from event data in JSON files and binary media files, stored together in a compressed zip file. Tackling some issues along with helping fellow attendees made the workshop even more exciting.

7. Realized why good communication is at the heart of Open Source communities

The talk ‘Non-native English speakers in Open Source communities: A True Story’, given by my mentor Samuel (a Brazilian), Masayuki (a Japanese) and Dong Ma (a Chinese), beautifully captured the diversity present in Open Source communities. They gave insights on issues faced by non-native English speakers along with handy tips to tackle them.

8. Formed new connections and strengthened existing bonds

Meeting other open source enthusiasts and contributors in person brought the experience to a whole new level. Meeting the former Outreachy intern Vaishali after her talk on ‘Finding bugs in the Linux Kernel’ was an enriching experience. Talking to Srijan Agarwal for the first time and soon realizing that we had lots of mutual friends is another cherished memory. After all, that’s the power of Open Source communities!

9. Unraveled the mysteries of the Science Centre

The venue of the conference made it even more fun. It was pretty easy to get distracted and eventually lost, though. The Tesla coils, earthquake generator, volcano room and fire tornado room fascinated me.

10. Explored another country

The trip pushed me into a flurry of seemingly uncomfortable situations: meeting new people with completely different lifestyles and cultures, navigating my way around where no one speaks my native language, trying out local cuisine which I could hardly enunciate while ordering, and so on. Yet the beautiful image of the Lion City is imprinted on my memory.

“The purpose of life is to live it, to taste experience to the utmost, to reach out eagerly and without fear for newer and richer experience.” ― Eleanor Roosevelt

Heartfelt thanks to the Outreachy Travel Grant for sponsoring my trip!

…or how to become better than me.

At a recent meetup in honor of Outreachy, a number of girls gathered around me, and one of them asked me the question in the title. I don’t know exactly what she had in mind, but nevertheless, here I want to roughly outline my path as a beginning software developer.

1) Linux; in my case, Debian.
2) ACM ICPC and Codeforces.
3) Coursera.
4) Learning various foreign languages, first of all English.
5) Treating education as something that lasts your whole life (life-long learning); learning in spite of university.
6) More programming literature and more actual programming.
7) Making use of passively spent time, for example on the bus, in a queue, or in a boring lecture.
8) Talking with more experienced people in different spheres of life. Some of them may turn out to be complete jerks, but don’t worry, there are few of them. As an acquaintance of mine put it: “No one in their right mind would laugh at a beginner instead of helping them.”
9) Last, but not least, the Google guide.

I hope to extend this list in the future.
I got an amazing chance to speak at FOSSASIA 2017, held in Singapore, on “Seasons of Debian – Summer of Code and Winter of Outreachy“. I gave a combined talk with my co-speaker Pranav Jain, who contributed to Debian through GSoC. We talked about two major open source initiatives, Outreachy and Google Summer of Code, and the work we did on a common project, Lumicall, under Debian.

The excitement started even before the first day! On 16th March, there was a speakers meetup at the Microsoft office in Singapore. There, I got the chance to connect with other speakers and learn about their work. The meetup was concluded by a Microsoft office tour! As a student it was very exciting to see first hand the office of a company that I had only dreamt of being at.

On 17th March, i.e. the first day of the three-day conference, I met Hong Phuc Dang, founder of FOSSASIA. She is very kind, and just talking to her made me cheerful! Meeting so many great developers from different organizations was exciting.

18th March was the day of our talk! I was a bit nervous to speak in front of amazing developers but, that’s how you grow. Our talk was preceded by a lovely introduction by Mario Behling. I talked about how the Outreachy programme has made a significant impact in increasing the participation of women in Open Source, with one such woman being me. I also talked about the Android programming concepts which I used while adding new features to Lumicall. Pranav talked about the Debian organization and how to get started with GSoC by sharing his journey! After our talk, students approached us asking questions about how to participate in Outreachy and GSoC. I felt that a lot more students were receptive to knowing about this new opportunity.

Our own talk was part of the mini DebConf track. Under this track, there were two other amazing sessions, namely “Debian – The best Linux distribution” and “Open Build Service in Debian”. The variety of experiences I gained from FOSSASIA was very diverse.
I learned how to speak on a huge platform, learned from other interesting talks, shared ideas with smart developers and saw an exciting venue and a wonderful city! I would not have been able to experience this without the continuous support of Debian and Outreachy!

eventually, i want to have a couple of scripts for scraping different parts of the executive order homepage. for now, just two. this one gets titles and hrefs (to be appended to a base url in a separate program):

# this program goes to the white house's main executive order homepage
# there, it gets the executive order title and href param
# it takes a page number as an argument in the terminal
# as of 3/23/17, possible arguments are '0' or '1'

import requests, bs4
import sys

whichPage = sys.argv[1]
eoBaseUrl = 'https://www.whitehouse.gov/briefing-room/presidential-actions/executive-orders?term_node_tid_depth=51&page=' + whichPage
eoRes = requests.get(eoBaseUrl)
eoRes.raise_for_status()
soupObj = bs4.BeautifulSoup(eoRes.text, "html.parser")

# get all the executive order links and titles
# returns array like this: [<a href>title</a>, <a href>title</a>, ...]
eoLinks = soupObj.select('h3 > a')

# for each link in the array,
for link in range(len(eoLinks)):
    # get just the href
    hrefOnly = eoLinks[link].get('href')
    # get just the title
    titleOnly = eoLinks[link].getText()
    print titleOnly

python eoTitleScrape.py 0 > out.txt

output from running the script twice:

Presidential Executive Order on a Comprehensive Plan for Reorganizing the Executive Branch
Executive Order Protecting The Nation From Foreign Terrorist Entry Into The United States
Presidential Executive Order on The White House Initiative to Promote Excellence and Innovation at Historically Black Colleges and Universities
Presidential Executive Order on Restoring the Rule of Law, Federalism, and Economic Growth by Reviewing the “Waters of the United States” Rule
Presidential Executive Order on Enforcing the Regulatory Reform Agenda
Providing an Order of Succession Within the Department of Justice
Presidential Executive Order on Enforcing Federal Law with Respect to Transnational Criminal Organizations and Preventing International Trafficking
Presidential Executive Order on Preventing Violence Against Federal, State, Tribal, and Local Law Enforcement Officers
Presidential Executive Order on a Task Force on Crime Reduction and Public Safety
Presidential Executive Order on Core Principles for Regulating the United States Financial System
Presidential Executive Order on Reducing Regulation and Controlling Regulatory Costs
Executive Order: ETHICS COMMITMENTS BY EXECUTIVE BRANCH APPOINTEES
EXECUTIVE ORDER: PROTECTING THE NATION FROM FOREIGN TERRORIST ENTRY INTO THE UNITED STATES
Executive Order: Border Security and Immigration Enforcement Improvements
Executive Order: Enhancing Public Safety in the Interior of the United States
Executive Order Expediting Environmental Reviews and Approvals For High Priority Infrastructure Projects
Executive Order Minimizing the Economic Burden of the Patient Protection and Affordable Care Act Pending Repeal

this one gets lines of executive orders:

import requests, bs4
import sys

# manually getting this array from eoTitleScrape.py until i figure out how to hook these up
hrefParams = [u'/the-press-office/2017/01/30/presidential-executive-order-reducing-regulation-and-controlling',
              u'/the-press-office/2017/01/28/executive-order-ethics-commitments-executive-branch-appointees',
              u'/the-press-office/2017/01/27/executive-order-protecting-nation-foreign-terrorist-entry-united-states',
              u'/the-press-office/2017/01/25/executive-order-border-security-and-immigration-enforcement-improvements',
              u'/the-press-office/2017/01/25/presidential-executive-order-enhancing-public-safety-interior-united',
              u'/the-press-office/2017/01/24/executive-order-expediting-environmental-reviews-and-approvals-high',
              u'/the-press-office/2017/01/2/executive-order-minimizing-economic-burden-patient-protection-and',
              u'/the-press-office/2017/03/13/presidential-executive-order-comprehensive-plan-reorganizing-executive',
              u'/the-press-office/2017/03/06/executive-order-protecting-nation-foreign-terrorist-entry-united-states',
              u'/the-press-office/2017/02/28/presidential-executive-order-white-house-initiative-promote-excellence',
              u'/the-press-office/2017/02/28/presidential-executive-order-restoring-rule-law-federalism-and-economic',
              u'/the-press-office/2017/02/24/presidential-executive-order-enforcing-regulatory-reform-agenda',
              u'/the-press-office/2017/02/10/providing-order-succession-within-department-justice',
              u'/the-press-office/2017/02/09/presidential-executive-order-enforcing-federal-law-respect-transnational',
              u'/the-press-office/2017/02/09/presidential-executive-order-preventing-violence-against-federal-state',
              u'/the-press-office/2017/02/09/presidential-executive-order-task-force-crime-reduction-and-public',
              u'/the-press-office/2017/02/03/presidential-executive-order-core-principles-regulating-united-states']

# mabes useful later
# allLines = []

baseUrl = 'https://www.whitehouse.gov/'

for param in range(len(hrefParams)):
    # get the page contents
    res = requests.get(baseUrl + hrefParams[param])
    # parse the contents into a bs4 object
    soupObj = bs4.BeautifulSoup(res.text, "html.parser")
    # from the bs4 object, get the class with the title
    title = soupObj.select('.pane-node-title')
    # get just the title text and strip white space
    title[0].getText().strip()
    # from the bs4 object, get the class with the content
    body = soupObj.select('.pane-node-field-forall-body')
    # get just the body text, strip white space, and split on newlines
    lines = body[0].getText().strip().replace(u'\xa0', ' ').encode('utf-8').split('\n')
    # allLines.append(lines)
    print lines

now i wanna do some pattern analysis.

I knew that we would arrive here at some point. While I was worried 3 months ago whether I had what it takes to deliver good work during my upcoming Outreachy internship, the empty feeling of not knowing what my daily routine would look like without all this has started to overwhelm me. What if this was a one-off thing? Maybe I was just lucky to have this and might not have the needed ingredients to continue such work in other environments. While I don’t think that any sign of imposter syndrome has caught up with me, doubts start to arise frequently when you have to think about “what do I do next?”.

So here I am, 3 months later, and the experience and skills I gained during these 3 months would have taken a much longer timeframe had I not been part of Outreachy. And I am really grateful for this. Sometimes a small opportunity to get stuff done is all it takes to have some impact on your life. While I do think that the world doesn’t revolve around Outreachy, the program has a special place in my heart, as it was encouraging to be working with like-minded people, where everyone is more or less on the same page. Admittedly, my work on Diversity & Inclusion at Mozilla is rather unique compared to the mostly technical positions of most other Outreachy projects.
I do, however, feel that this was a refreshing change, one that offers more inclusivity for non-programmers contributing to open source projects. I hope to see more of that in upcoming editions, especially from Mozilla, which reflects a diverse culture of contribution opportunities. It would be great to see such a culture adopted by more open source projects in the future. Judging from my own experience, there are a lot of non-technical people already contributing in open source whose value is often no less than a programmer’s. I’m looking forward to seeing projects come closer to this mindset.

I hope to see Outreachy making more of an effort in the transition phase after the internship, though. Mentoring interns on what they can do in their upcoming endeavours would have a great impact on them. In my case, I already had a few years of background in the Mozilla communities. Someone else might not be that privileged, however. This is where mentoring would make a rather big difference.

I want to thank everyone who has helped me during the past months, whether it was for a Focus group or a few nice reassuring words, which are always great to hear on an adventure like this. I hope you will stick around, as I definitely will.

I am just wondering when the Grace Hopper Conference (GHC), one of the largest conferences for women technologists, will start paying its speakers. I have organized both small and large conferences and have always prioritized creating an inclusive space. The creation of an inclusive conference goes beyond what is built for the participants; it also includes the speakers and facilitators. I understand when small conferences are unable to provide payment to speakers and facilitators (guess what though, AlterConf has done a wonderful job at paying no matter what).

When it comes to large organizations, I have absolutely no patience for organizers that expect free labour from speakers or any facilitator. The worst part about GHC is that it’s a space for women-identified technologists. How can we centre and celebrate women if we are not paying them for their labour? It completely counteracts the narrative of celebrating women for their achievements if we expect them to work for free and pay for their own travel and accommodation. Sadly, speaking in front of thousands for free does not pay the bills and never will.

I asked GHC via Twitter what’s going on, and this was their response to the rising sponsorship rate (and not paying speakers). I thought this response was funny. I get it, they are defensive about people saying anything, but this is the part that got me: “$ generated by GHC is used for programs that help women technologists all year”. So what exactly happens during the couple of days in the year when GHC occurs?

Based on the goals of GHC , I am the “targeted audience” a young black woman working in technology but I have personally never attended GHC. I am critical of the spaces I enter, and I believe it is my role as participant to hold an organization accountable for their actions. I have heard that GHC doesn’t pay their speakers, and I have also heard about the difficulties students have attending. Given those facts I haven’t bothered to attend, and don’t plan on doing so in the future.

I do believe that GHC provides an opportunity for young women to have access to recruiters and to meet amazing women doing great work in the field. I think it is important to be critical, and to recognize the individuals who aren’t able to attend or speak. GHC is the largest gathering where “women technologists and the best minds in computing convene to highlight the contributions of women to computing”, according to their website. As the leader in the field, it is integral for GHC to pay its speakers. Paying speakers compensates them for their labour, recognizing that labour is not free and that it is important.

Well, the wage gap is pretty intense even within technology, so why wouldn’t we support paying women? Women in the tech field working as computer programmers made 28.3% less in 2016. I wish it were rocket science, but sadly it isn’t: GHC gains sponsorship from all the big organizations and should prioritize inclusive practices. The diamond sponsorship rate is over $100,000, which could pay several speakers. The trend of not paying female speakers shows how far we still have to go in regards to Diversity and Inclusion within technology. This problem stems from a greater issue and trend of focusing on white cis women, who often have the financial capacity or resources to work for free. I am not saying all white women have been afforded that opportunity, but when we don’t pay speakers we EXPECT them to have the financial resources to pay their way. I often think about Nicole Sanchez’s article “Which Women in Tech?”, in which Sanchez shares that we often centre a particular narrative when it comes to women in tech. GHC can do better, and we should expect them to do better as “leaders” in the field.

Next steps:
• If you work for an organization that sponsors GHC, please ask them to look into, or request, that speakers be paid. Here is a list of companies that have currently sponsored.
• If you are attending GHC this year, send a note to the organizers asking about speakers.

So GHC, when are you going to start paying speakers?

Hey all,

My Outreachy internship primarily involved tasks related to improving the io-stats translator of Gluster. So what is a translator? I shared a video in my last post, “Hacking Gluster FS”, in which they explain it clearly. But to put it briefly: translators convert requests from users into requests for storage. They are layered in such a way that a request passes through them, and a translator can also modify requests on the way through. The io-stats translator, which my tasks related to, is used to instrument Gluster from within.
The volume profile command provides an interface to get the per-brick I/O information for each File Operation (FOP) of a volume. The per-brick information helps in identifying bottlenecks in the storage system. To give a brief on my tasks:

1) As stated above, the profile command provides an interface to get the per-brick I/O information for each file operation of a volume, so it is necessary that all file operations are listened for. To achieve this, my first task involved finding the missing file operations and adding them to the io-stats translator. The commit related to it can be found here.

2) While doing the above, my mentor suggested that a code-generation approach would help reduce the amount of code in Gluster, as the code for each file operation looked similar and could be generated from a common template. An initial step towards achieving this is to have the details of every file operation together. In this attempt, we found that a file operation named "compound" was missing from the list, and we attempted to add it. The related commit can be found here.

3) With support for all the file operations added, our next step was to make the code-generation framework work. The related work can be found here. We had a few issues with it, which can be found in the comments of the commit.

With this completed, my next tasks involve making the Gluster profile command more useful. I shall share my learning and work related to it in my next blog post.

My Outreachy internship ended last week, March 7 (6 in the US). Three months have gone by fast; part of me still can’t believe I finally upstreamed my first driver. Since my last update, I worked on supporting both I2C and SPI by factoring out common code and using the regmap API to handle the relevant calls of the I2C/SPI subsystems.
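Factoring out common code this way means the shared driver logic never touches the bus directly; that is the idea behind regmap. Here is a rough Python sketch of the pattern (toy classes for illustration only, not the kernel API):

```python
# Sketch of the bus-abstraction idea behind regmap: the shared driver code
# talks to one read/write interface, and only the backend knows the bus type.
class I2CBackend:
    def __init__(self):
        self.regs = {}          # fake register file standing in for hardware
    def read(self, reg):
        return self.regs.get(reg, 0)
    def write(self, reg, val):
        self.regs[reg] = val

class SPIBackend(I2CBackend):
    # On real hardware the wire protocol differs; the interface stays the same.
    pass

class SensorDriver:
    """Common driver logic, shared between the I2C and SPI variants."""
    DEVID_REG = 0x00
    def __init__(self, bus):
        self.bus = bus
    def probe(self):
        # Read the device-id register regardless of which bus is underneath.
        return self.bus.read(self.DEVID_REG)

i2c_sensor = SensorDriver(I2CBackend())
spi_sensor = SensorDriver(SPIBackend())
```

In the kernel the same shape appears as a `struct regmap` created from either `regmap_init_i2c()` or `regmap_init_spi()`, with the shared code calling `regmap_read()`/`regmap_write()`.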
I also got the opportunity to learn about ACPI and the Device Tree, which are necessary for enabling enumeration of the sensor, especially when using the SPI protocol1. The last few weeks I worked on writing support for the triggered buffer, where I got to explore interrupt handling and the IIO trigger and buffer. I submitted the patchset this week and will be working on revisions from here on.

I’m happy and content with the work I’ve done. I learned a lot from this experience and I’m very grateful for this opportunity to contribute to the Linux Kernel. I can’t imagine what it would be like if I hadn’t taken “20 seconds of courage” to join the outreachy-kernel mailing list. The internship journey was fun and nerve-wracking at the same time. Things were not always smooth sailing:
• A wiring mishap leading to an -EPROTO error.
• Internet connectivity issues (it has always been stable, why did it have to occur during the internship???) – this led to moving to another ISP.
• I got sick on one of the weekends. It was stomach pain that made me uncomfortable for a few days. The doctor’s hunch was that it could be hyperacidity. I took the prescription and got better. I have to avoid coffee and spicy foods during this time, which is sad.
• I could not get some stuff to work. These were instances where Daniel (one of my mentors) would connect to my computer to help troubleshoot things.
These incidents diversified the experience. On the upside, working on the patches led to exploring other parts of the kernel. I didn’t expect I’d get a chance to submit patches about the device tree or regmap. I don’t think it will be the last time though, since I plan on continuing the project in my spare time. So yes, there will be more adxl345-related posts in the future.
I would like to thank the organizations that made this program possible, the Linux Kernel for accepting me as an intern, my mentors Daniel and Alison for the guidance, and lastly Jonathan (the IIO maintainer) and the other people who commented on my patches for their insightful code reviews.

1. I2C can be instantiated from user-space without having to rely on ACPI or Device Tree.

Hey there! Sorry for the late response, feel free to email me: hi at shubheksha dot com

Yes, definitely! Feel free to email me: hi at shubheksha dot com

Huh, fascinating. Didn’t occur to me that rectangles had the option to be links. Thanks!

I’ve been working on making a portfolio of what I did for the Fedora Regional Hubs project. Did you know I did a _lot_ of stuff? I mean, I was definitely busy getting things done throughout. I knew this. Summarizing what I did in a way that someone else can follow is surprisingly complicated. There’s a lot of information scattered around my email, Pagure tickets, throughout this blog, and on my laptop. I’m really glad that I was blogging the whole time, because it makes it a lot easier to reconstruct what I did. But boy, this is very time-consuming! I am glad that I decided to try starting with a presentation outline: while the presentation isn’t done, it’s started, and it helped give me focus for the summarizing I have been doing. I’m also glad that I am able to ask Alex Feinman for feedback, as it has been very helpful to be able to talk to him about it. And that I could ask a million questions of Mo both during and after the internship.
Even just the high-level outline I wrote last night in a fit of comprehension looks like a lot:

Preliminary research
• Competitors
• Fedora Hubs
• Contextual interviews

Analysis
• Processing raw data (transcribe, summarize, top 5 pains, problems, workflows)
• Brainstorming with others (whiteboards, notes on brainstorming session, sketches to start with)
• Questions
• Affinity mapping
• Prioritization
• Deep dive/brainstorm on top questions (don’t forget survey!)

Design
• General sketches & wireframes & tickets (invitation page, login, etc) — after general brainstorming session
• Specific sketches & initial wireframes & tickets (search/filter pages for people, events, Hubs. Notifications/widgets)
• Find holes, enter tickets.
• Additional discussion as needed/follow-ups in weekly meetings/blog posts

Feedback and iteration
• Feedback from Mo on wireframes, discuss, adjust
• Feedback from sayan on feasibility
• Locations!
• From participants?

Usability sessions
• Prepare for usability sessions.
• Prototypes need connections among themselves.
• Identify tasks
• Prioritize tasks
• Identify and contact participants (who, why?)
• Usability script
• Usability sessions

Analyze feedback
• Transcription, summary, highlight important
• Spreadsheet!
• Discuss with Mo

Maybe you would quit your job to become an artist? A musician? A photographer? A doctor? A teacher? Maybe you would travel the world with just a backpack and some dollars. I want to share my story: deciding to put my fears aside and making one bold move changed my life forever. I left my career as a management consultant to become a developer. I made the decision on a Sunday in April 2013 after spending the whole day reading about development bootcamps. By then I knew I loved coding, as I had been building tools for my team using Visual Basic and was really enjoying myself taking online coding lessons. I had also recently built upgrade.pe. It wasn’t an easy decision though.
For months, I’d been asking myself puzzling questions: Should I leave my job? It took a lot of time and money to get my degree and subsequently this job, which is a good job. Is this too crazy? Should I keep coding as a hobby? Is it too late? How am I going to afford this? One of the things that kept me from making the decision was a fear of making (another) mistake. See, back in 2011, as I was trying to find my true calling, I had quit a great job in order to take a job as an e-commerce analyst for a hotel chain in my country. I thought the job would take me closer to technology, which is the field where I knew I belonged. Unfortunately the job ended up being a total fiasco, and the risk I took by choosing it did not pay off at all. So as you may imagine, by April 2013 I wasn’t as confident about following my gut as before. After much deliberation I finally decided to apply for a bootcamp in Canada, HackerYou. They had a 9-week Ruby on Rails bootcamp hosted by none other than Shopify, and they were offering scholarships. Being awarded the scholarship was crucial to deciding to quit my job and move to Canada to attend classes, as I wouldn’t have been able to pay for the course otherwise. The nine weeks I spent in Ottawa completing the bootcamp are without a doubt among the best weeks of my life, and that is saying something considering I almost got pneumonia in my first week there. I love Canada; it really is a place where you breathe safety and respect. I learned a lot about Rails but, most importantly, met amazing people who inspired me with their desire to get the most out of the program. Each classmate had a different story, but we all shared one thing: we really wanted to be there. We had decided we wanted to spend nine weeks of our lives devoted to learning. Learning Rails in Ottawa was just the beginning of this crazy adventure. After that I founded a startup, which failed but taught me so much.
You can read about the next part of this adventure here: When failure is good (coming soon).

I appreciate this! At the moment, I’m still not sure if it’s possible to have an entire line in a table be clickable. I did figure out how to tell people to turn on ‘clickable’, although sadly after my last usability session. Next time, clearly.

Rails’ default logger is quite simple, but it becomes quite difficult to debug in production as the app grows big. Imagine you have 5 different app servers and you have to grep all the log files. Some simple shell scripting could ease the pain, but the log files are still quite messy, and that still won’t help you analyse which URL returned a 500, the time taken by the API, etc. It would be great to have all logs in a centralised location, easing the pain for developers. Yes! There is a gem for that!

Log4r is a powerful and flexible Ruby gem for logging, inspired by the popular Java library log4j. It supports multiple output destinations per log level, custom log levels, etc. Check out the library. You might also want to check out Graylog. Graylog is an open source log management solution that can be used to monitor your logs. It is built on top of Java, MongoDB and Elasticsearch. To know more about it, click here. Graylog also comes with a web interface; check out the link to see how to set up the Graylog web interface using nginx as a reverse proxy. I used the gem ‘log4r-gelf‘ along with log4r to send my logs to Graylog.

1. Add log4r and ‘log4r-gelf‘ to your Gemfile

gem 'log4r'
gem 'log4r-gelf'

and do bundle install.

2. Now create a yml file, log4r.yml.
log4r_config:
  loggers:
    - name: production
      level: INFO
      trace: false
      outputters:
        - production
        - gelf
    - name: development
      level: DEBUG
      trace: true
      outputters:
        - development
  outputters:
    - type: DateFileOutputter
      name: production
      filename: production.log
      dirname: "log"
      formatter:
        date_pattern: '%H:%M:%S'
        pattern: '%d %l: %m '
        type: PatternFormatter
    - type: GelfOutputter
      name: gelf
      gelf_server: "example.graylog.in"
      gelf_port: "12219"
      level: INFO

3. Next, in application.rb:

require 'log4r'
require 'log4r/yamlconfigurator'
require 'log4r/outputter/datefileoutputter'
include Log4r

module ChillrApi
  log4r_config = YAML.load_file(File.join(Rails.root, 'config', 'log4r.yml'))
  YamlConfigurator.decode_yaml(log4r_config["log4r_config"])
  config.logger = Log4r::Logger[Rails.env]
end

Last Monday (March 6) was the last day of my Outreachy internship. And I wouldn’t be exaggerating at all if I said I have learnt more in the past 6 months (including the application period) than all of the knowledge combined in my four years of engineering. Sad but true. During the internship period I worked on the design of the Firewall UI for Cockpit. I started with Pencil for making the mockups and moved on to Inkscape. Although I worked on implementing the UI simultaneously, it didn’t turn out to be as good as I expected it to be. We also did a lot of usability testing with remote as well as local users, and this was one of the high points of this internship.

• UX design is all about details and consistency! Initially, while using Pencil, I would miss out on a lot of important details since there wasn’t room for much modification. But once I moved to Inkscape, it was overwhelming to keep in sync with all the details and yet remember the larger picture. It is easier to start with the larger elements such as frames and placeholders and then move on to the smaller details like text and font. Also, fixing small details like width, height and alignment at the end can be a pain.
It is less time-consuming to keep making the changes along the way, since it also maintains consistency and there is less chance of missing other elements that had the same specifications.
• Usability feedback is the key! I wrote about this in my previous post and I am saying it here again. The usability testing provided excellent and unbiased feedback about the user interface and the design. A different perspective from possible users helped us to identify the pain points and work on them in further iterations.
• Open source is all about community! When I started my application process, I was absolutely terrified of talking on IRC. Not only because I was an absolute stranger, but here I was in a room of experienced people who are experts in their fields. I would check my message a hundred times before sending it. Although now I don’t over-think as much as I used to before sending a message, I still have to reach the point where there is no over-thinking before participating on IRC. But I had awesome mentors who helped me at every point and made me confident about interacting more with the community. I think this is something that probably all newcomers go through. A helpful pointer would be to remember that everyone is very helpful.
• Take chances! When Outreachy was suggested to me by a friend, I thought I would never be able to get an internship with Outreachy. The bar was set high and, judging myself, it seemed that I still had a long way to go. But I took a chance and applied, because learning more about the open source community and contributing to it was important to me. So, to all the future applicants to Outreachy, I would only say: give it your best and a bit more than that.

What next? While I will continue to contribute to Cockpit, I am more than ever excited to learn more about design and how open source perceives and implements design.
Then there is also a goal to spread more awareness about user experience and how it affects our day-to-day life. Since I am still a newcomer to the world of user experience, there is a lot to learn and implement. Also, I signed up for a UX specialisation on Coursera, so there will be updates about that too. There might be a lot of other side projects that I might work on, but we’ll talk about that when we come to it.

What is Outreachy? It is an annual program that gives you the opportunity to work remotely on an open source project for 3 months and get paid for it. The amount can vary; for round 11 it is $5500. And you don’t have to write code: you can also do design, documentation, marketing and other kinds of work that benefit the open source community. Originally, the goal of the stipend was to support women working in software development (as we know, most developers today are men). More recently, the stipend has broadened its support to more underrepresented minorities. It is sponsored by the non-profit organization GNOME.

Am I eligible for the stipend?
You need to:
• know English at least at B1 level (the minimum you get at school)
• be over 18 years old
• not be a resident of Crimea, Cuba, Iran, North Korea, Syria or Sudan, and identify as a woman, a trans man, or genderqueer (including genderfluid, or not identifying with any gender).
• be participating in this program for the first time, and not have taken part in Google Summer of Code before.
• be able to work 40 hours a week during the program period (May 30 -- August 30, 2017).
• be eligible to work in your country of current residence.
• not live in a country subject to US export restrictions or sanctions.
More details.

How to apply?
In short: the selection round runs from February until March 30, 2017. You need to
• find a suitable project
• complete a small test task for it
• wait for the results, meanwhile talking with the community and trying to get into the task you want to work on this round.
Of course, ideally you should contribute to the project for about a year, taking on harder and harder tasks, and apply for the stipend once you have become a mid-level contributor, but for a first attempt you can also try it this way. For more details, see here.

How to win the stipend?
I couldn’t put it better than this (link).

How to set up a workflow for these 3 months?
OK, if you applied and got the stipend, now the hardest part begins. If you have ever worked from home, you know how often your family needs your help right when you are in the middle of the work process. I can’t arrange everything for you, but after my Outreachy experience I do have some tips & tricks:
1. Work at a desk; a couch or bed does not put you in a working mood. You will be given a task that takes 40 hours a week for 13 weeks. Do you think you will have time to procrastinate? Either you pull yourself together or you fail the whole thing (and get no money). Idleness is the devil’s plaything, isn’t it!
2. Stay in contact with your mentor as often as possible. If (like me) you have problems with self-organization, or you are simply working from home for the first time, ask your mentor to check on you every day. Every evening, write them a report on the work you have done and your plans for the next day. As your self-control improves, you can switch to half-week sprints, and if your third eye opens, reporting once a week is enough.
3. Most likely you will not be working on the project alone: besides you there will be other Outreachy participants, and beyond that the million-strong army of the open source movement, doing the same work as you, but every day of the year and for free. Don’t be shy about talking to experienced developers or to other newcomers; in open source there is no one who wouldn’t want to discuss a new feature with you or suggest an approach to solving a bug. By tradition, you can be on a first-name basis with everyone. Even Stallman =) The main communication channels are specific to each project, but usually they are IRC and mailing lists.
4. Explain to your household that you are working: this is not just a hobby, you are being paid for this work. Set the hours during which you must not be disturbed unless the apartment is on fire, say from 8 to 12 and from 2 to 6, and get them used to the idea that you will be unavailable at those times for 3 months.
This blog post is a selective translation of the detailed instructions; I sincerely advise you to read them in full. Nothing remains for me but to wish you good luck! Pass this on to everyone who you think should know about the program.

This Monday was the last day of my Outreachy internship. I’m sad it ended, but tremendously happy I had this experience! I consider myself incredibly lucky for the great and talented people I worked with, and especially my dear mentors, Dustin and Brian, who taught and helped and encouraged and advised and cracked jokes and patiently explained and just were always there for me.

I learned so much during these three months, I don’t even know where to start, but let me try:

• I learned what TaskCluster is, and what CI is in general;
• What REST APIs are, how they work, and how to write them;
• The basics of how and when to ask questions (this topic is the most difficult in this whole programming business! well, naming things is also difficult);
• How to do git rebase properly and, in general, how to work with remote repositories collaboratively;
• A million small things about how to communicate with people;
• The basics and importance and problems of testing;
• More of JavaScript, more of React, more of npm, some Express, some Azure, some Mocha, some Less;

The major highlight of the internship was the trip to Toronto. I may not have been as productive there as I would have liked or had hoped to be, but it was lots of fun and personal growth and new faces. Another highlight was the Tech Topic, where I talked about GitHub integrations 😊

To all those who are in doubt whether to apply for Outreachy or not, my advice would be: Yes!! Definitely apply! If you have any questions for me, or need any advice — just drop a line!

Introduction

Graylog2 is a powerful tool for log management and analysis. One use case we had at my company was collecting the logs of a Rails application running on 5 different servers in a single location, so as to make debugging easy. Graylog is built on top of Elasticsearch, MongoDB and Java. First you need to set up Graylog on your server; these links are likely to help you.

Once it is set up, you will want to access the web interface, which runs on port 9000. You can use a single port for both the Graylog REST API and the web interface, or two separate ports. This is my nginx configuration:

location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:9000;
}

location /api/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:12900/;
}


Graylog Configuration


web_listen_uri = http://0.0.0.0:9000/
rest_listen_uri = http://0.0.0.0:12900/
rest_transport_uri = http://127.0.0.1:12900/


Graylog has a very good community at https://community.graylog.org/. You may post your issues there.

My entire life, I have been prone to getting stuck in the details of a thing. It’s one thing to know this, and another to continuously find ways in which it affects me and for which I have developed workarounds.

I’ve learned to recognize that sudden exhaustion means that I’m stuck in the details and need to take a break.

Whether it’s my surroundings, a particular task, or a website, details are likely to distract and overwhelm me. This may or may not tie in with my ability to notice details that others overlook.

I suspect it’s much of why I have trouble with clutter in my surroundings, whether at home or in stores and restaurants. I suspect it may also relate to my difficulty with noisy surroundings.

I was reminded of this tendency of mine during the recent Outreachy project. Of course, I was also reminded of the many workarounds I’ve developed to handle it.

### CSS and Coding

When working on CSS, I started to wonder if the difficulty that I have with writing code is purely about the number of details involved. I understand coding fairly well, and have no problem talking about it with those who do it for a living. At the same time, trying to write code usually results in me being exhausted and frustrated, and not actually successful at creating the code. Modifying code is always much easier, I suspect because there’s simply less to deal with and I am less likely to get stuck.

Using CodePen helped, I suspect in large part because I could see the immediate effects of what I was doing. My strong tendency to break problems into smaller pieces also came into play, as when I was stuck on a particular aspect of the CSS, I’d just clone my Pen and take out the bits that weren’t currently relevant.

### Transcription, summarizing, and brainstorming

#### Transcription and other detail-oriented tasks

Transcribing from audio or video means I’m faced with a wealth of information that needs to be expressed in a written way. For the first in a set of items that need transcription, I always find myself getting stuck and writing down _way_ too much stuff.

I tend to need frequent breaks when transcribing, simply due to the sheer amount of information and the fact that I will start to forget what’s actually important. After I’ve done the first in a set, it is usually much easier for me to identify what’s important and what’s not, so the rest will go more quickly.

Similarly, when working on a task that is part of a larger project — as most tasks are — I can easily get stuck in the nitty-gritty of the task and forget why I’m doing it. This makes it harder to actually perform the task due to being stuck and to not remembering the purpose.

#### Summarizing

One of my most effective workarounds, in addition to frequent breaks, is to summarize what I’m doing and what I’m learning. Whether it’s in a blog post, as with the Regional Hubs project, or in talking to others involved in the project, summarizing and explaining what I’m doing never fails to get me back out of the details. Of course, it’s also typically useful for the people with whom I am conversing and for my own later use.

It is typically easier for me to write than talk my way out of being stuck, as long as I write as if I have an audience. And Medium’s interface is _fabulous_ for this. It doesn’t get in the way of what I’m trying to say, and is minimal enough to not itself act as a source of distracting details.

It’s also helpful to have written logs of conversation, which can be harder to get with people I’m speaking to in person. I retain what I read much more easily than what I hear. For this reason, having a remote job can be useful, because most conversations are written and often easily logged. This is also why I tend to try to take notes during conversation, or ask people to send me written reminders. Similarly, I’m trying to add the habit of sending a written summary of what I understood from spoken conversations when I am concerned that I missed something.

I strongly suspect this need to summarize and explain to get out of the details is why I am good at explaining things. Lots and lots of practice, plus that being how I understand things better.

I also strongly suspect this is why I so badly want other people to work with or near: other people and the need to explain what I’m doing help me stay grounded in the overall purpose of what we are doing.

#### Brainstorming

Like discussing what I’m doing with other people, brainstorming with others is fabulously useful, especially if they know something different about the topic than I do. While I might get stuck in the details when investigating something on my own, having someone else there means that they might not get stuck, or at least we can work together to pull ourselves out of rat holes.

Of course, brainstorming also brings in the wonderful thing called ‘other people’s perspectives’. No one can think of everything, no matter how hard they try. Involving other people means that together you have a good chance of coming up with things that work better than what either of you would come up with alone. People are very good at building on each other’s ideas, and often find it enjoyable, as well.

### Data Analysis

Analyzing data typically involves a great deal of detail work. There is usually a great deal of data, and it’s all too easy to lose track of the big picture of why the data was collected in the first place and what the goal actually is.

I _love_ that analyzing data in the UX world is often a group experience, whether through affinity mapping, brainstorming, various methods of prioritizing, and other things that aren’t currently coming to mind. It means that I don’t get stuck as often.

In grad school, analyzing data was often an exercise in figuring out ways to not get stuck in the data and remembering what I was there for. Analyzing data alone is not good for my mental health, as I’ve not yet found useful ways to keep myself on track for long periods of time. Statistics are hard for me, not because of the math, but because I have trouble remembering what to do when or why.

I also love that in UX there are often diagrams to remind you what research methods are most useful when. I’m sure that’ll come more easily to me with practice, mind you.

Speaking of research methods…

### Learning UX is full of details

I think the biggest problem that I had when trying to learn UX on my own was the sheer amount of information. Having an internship and people to work with means that I had a way to focus.

Pre-internship, having projects to work on didn’t help enough in terms of focus, because there were so many options.

I tried to write blog posts about what I was learning, as you can see early in this blog. Much of the time, writing the blog posts meant that I kept finding out how much I didn’t know yet, and how much trouble I was having figuring out what to learn first.

There is a _lot_ to UX. I have the skills to do it, I know for certain. It can be daunting navigating the sea of possibilities to identify what I should focus on.

### Visual Design is full of details

I suspect the trouble I have with figuring out visual design is that it’s full of details. At least when I was trying to do things in Inkscape, the sheer quantity of things that you can do meant that I often had no idea where to start. Even once I understood that there were sample style patterns in the hubs design github, there were still a lot of possibilities.

I have no idea what’s important to pay attention to in visual design. I don’t know how to tell what’s a thing that needs to be the same always (nor do I know how to make sure that’s true), and what the range of ‘reasonable’ is. And the number of tools in professional drawing programs is absurd. If I don’t know what I need, how would I possibly know what tools to use, when?

I’m sure this is a tractable problem to solve. At the moment, though, it’s an especially daunting one. It is probably not aided by my lack of visual imagination or memory.

I think this is why I’m so happy that Balsamiq exists. The number of tools available is much more limited, and rather than trying to guess what something should look like, there are a number of items that already have a template for you to use. Indeed, working in Balsamiq is kind of like having a lot of small templates that one can use as building blocks, rather than making it up as you go.

I worry that Sketch will be too flexible. I won a license for it, and I should have access to Macs in my household that I can play with it on. Indeed, after this internship, I am somewhat more comfortable with the idea of playing with it. I have gained some visual design knowledge just by frequently referring to the prototypes that Máirín Duffy made.

### Portfolios are full of details

At the moment, I am trying to distill what I did in this internship into a portfolio format. I keep finding myself stuck in details, so I’m thinking that perhaps it makes more sense to create a presentation, first. I’ll need one regardless, and those force you to stay big picture.

In closing…

The world is full of details!

This is my last blog post of Outreachy. During this period, I finished the Chinese translation of GNOME 3.22 and completed most entries of GNOME 3.24. Because new entries kept emerging, I talked with my mentor Tong and decided to finish 3.24 after the freeze date and before the release date. On the other hand, I improved the guideline of the Chinese Team: I updated it on the basis of the latest English version and borrowed some material from the Free Software Localization Guide for Chinese (China).

About the future: I’ll complete GNOME 3.24 with the other translators before March 22nd, and I have some other ideas about the Chinese Team guideline that I’ll try to implement. Besides, I want to try making more contributions than l10n for GNOME, and I would love to become a module maintainer in this community, which will take me deeper into GNOME. Also, I’m trying to spread the word about Outreachy to others, especially Chinese girls.

I'd like to tell the applicants of round 14, or anyone who wants to apply: DON'T BE SHY. Show your abilities to your mentor as much as possible, and don't be confined to the test she/he gave you. And if you are selected, an important thing is to get to know more people in your organization and the many other interns; it will help you become fully integrated into the circle of FOSS.

Above all, it was a wonderful experience. Outreachy gave me a great opportunity to learn and contribute to FOSS. Thanks to everyone who helped me during this internship.

the adl has become more palatable to leftier people since their bonkers ED, abe foxman, left in 2015. but the way they dress israel advocacy work up as general “anti-hate” work is really problematic, especially when they use the anti-hate banner to lump palestine solidarity activists in with, like, richard spencer.

where is some of this lumping happening? in the blog tags!

i wrote this thing to get blog posts off the old adl website. when i tried with just urllib, i got a weird javascript error, which, upon googling, i learned had to do with a script some websites have that says “don’t load anything if the requests aren’t coming from an actual browser.” the workaround appears to be using a headless browser. i think what’s happening below is that phantomjs is the headless browser, but since i’m in python i have to drive it through a selenium wrapper? i’m not positive why i need both.

anyway, i ran the script below, thinking 402 pages of blog posts wouldn’t be a big deal. 402 pages of blog posts is, in fact, a big deal. so i changed the params to only grab pages 360 to 402.

# $ npm install phantomjs-prebuilt
# $ pip install selenium

# http://stackoverflow.com/questions/13287490/is-there-a-way-to-use-phantomjs-in-python
from selenium import webdriver

driver = webdriver.PhantomJS(executable_path='node_modules/phantomjs-prebuilt/lib/phantom/bin/phantomjs')

# create a param list that includes pages 360-402 (range's end is exclusive, so 403)
params = list(range(360, 403))
# create a txt file to hold everything
txt_file = open('adl-posts-pages-360-402.txt', 'wb')
# for each page number in the list
for p in params:
    # load that page of the blog index (BLOG_URL is the old adl blog; not reproducing the actual url here)
    driver.get(BLOG_URL + '?page=' + str(p))
    # get everything in the id="content" tag
    element = driver.find_element_by_id("content")
    # encode it and write to txt_file
    txt_file.write(element.text.encode('utf-8'))
# close the txt file
txt_file.close()
driver.quit()

print('done.')

cool! i grepped ‘students’ but then thought ‘tags’ were more interesting.

grep 'tags' -i adl-posts-pages-360-402.txt > tags2.txt

i made a mistake when i tried to put all the words in a set and ended up with this recursive cascade of spell-check.

but this worked:

cat tags.txt | python toSet.py

import sys

count = {}

for raw in sys.stdin:
    raw = raw.strip()
    tags = raw.split(', ')
    for tag in tags:
        if tag in count:
            count[tag] = count[tag] + 1
        else:
            count[tag] = 1

for key, val in count.items():
    # if the tag appears more than 15 times
    if val > 15:
        # print it
        print(key + ": " + str(val))

output

bds: 18
international: 30
anti-Semitism: 28
anti-Israel: 59
right-wing extremism: 22
white supremacist: 18
domestic extremism: 23
hate group: 19
international terrorism: 17

ugh. fuck that.

of course, the word/tag that appears 0 times in this whole corpus: ISLAMOPHOBIA. so an acrostic-ish. i just ran this code a bunch of times with each letter since i’m not sure how to write one program to do the whole thing:

import sys

for raw in sys.stdin:
    raw = raw.strip()
    sentences = raw.split('. ')
    for sentence in sentences:
        # count occurrences of the current letter (i swapped this in by hand for each run)
        count = sentence.count('A')
        if count == 1:
            if len(sentence) < 100:
                print(count)
                print(sentence)
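for the record, one way to do the whole thing in a single program might look like this: read the corpus once, then loop over the letters of the word. this is just a sketch under the same filters as above (letter appears exactly once, sentence under 100 chars), and it takes the first match per letter, whereas picking by hand gives a better poem:

```python
import sys

def pick_lines(sentences, word):
    # for each letter of the word, keep sentences containing that
    # letter exactly once and shorter than 100 characters
    lines = []
    for letter in word:
        matches = [s for s in sentences
                   if s.count(letter) == 1 and len(s) < 100]
        # take the first match per letter ('' if nothing qualifies)
        lines.append(matches[0] if matches else '')
    return lines

if __name__ == '__main__':
    sentences = []
    for raw in sys.stdin:
        sentences.extend(raw.strip().split('. '))
    for line in pick_lines(sentences, 'ISLAMOPHOBIA'):
        print(line)
```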

the “poem”:

But I know who writes your pay checks.”

which maligns and debases the Jewish State.

ADL has repeatedly urged the UN to defund and disband it.

A rundown of the call:

Efforts by Methodist anti-Israel activists to divest from Israel began in 2008

One of the activists argued that these demolitions are made possible by U.S

The boycott effort was spearheaded by a Dubai-based Palestinian novelist named Huzama Habayeb

Jewish Voice for Peace Promotes Anti-Israel Hagaddah

One of the men,

The event hosted a former Israel Defense Forces (IDF) soldier, Sergeant Benjamin Anthony.

The Muslim Student Union at UC Irvine is at it again.

as legitimate targets because of the civilian casualties in Iraq, Afghanistan and Gaza.

weird thing:

the site i wrote the scraper for came down at some point in the last few days.

mysteries. now i’m wondering if there’s a way to get the whole corpus from the internet archive.
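one route, for the record: the internet archive’s CDX api lists every capture it has for a url prefix, and each capture can be replayed through web.archive.org. a sketch using only the stdlib — note that ‘blog.adl.org/’ is a guess at the old blog’s address, not something i’ve verified:

```python
import json
import urllib.parse
import urllib.request

CDX = 'http://web.archive.org/cdx/search/cdx'

def cdx_query(url_prefix):
    # build a CDX api query listing archived snapshots of everything
    # under url_prefix, as json, collapsed to one capture per url
    params = {
        'url': url_prefix + '*',
        'output': 'json',
        'collapse': 'urlkey',
    }
    return CDX + '?' + urllib.parse.urlencode(params)

def snapshot_url(timestamp, original):
    # a wayback machine url that replays one capture
    return 'http://web.archive.org/web/%s/%s' % (timestamp, original)

if __name__ == '__main__':
    # first row of the json response is the field header
    rows = json.load(urllib.request.urlopen(cdx_query('blog.adl.org/')))
    header, captures = rows[0], rows[1:]
    for row in captures[:5]:
        fields = dict(zip(header, row))
        print(snapshot_url(fields['timestamp'], fields['original']))
```

from there it’d be the same scraper as before, pointed at the wayback urls instead of the live site.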

Getting the in-person stuff to work was crazy complex. One thing in favor of remote testing: it’s a lot easier to organize!

Thanks for being willing!