An A to Z on writing a lint against single-use lifetime names in Rust.

We start with a simple example.

fn deref<'x>(v: &'x u32) -> u32 {
    *v
}

fn main() { }

Step 1.

Consider the lifetime 'x. It is used only once in the whole function. What we are trying to achieve is a lint that warns that 'x is used only once; in that case you can do away with the lifetime binding 'x altogether and write the function as follows.

fn deref(v: &u32) -> u32 {
    *v
}

fn main() { }

Current Progress

warning: lifetime name `'x` only used once
--> $DIR/single_use_lifetimes.rs:12:10
|
12 | fn deref<'x>(v: &'x u32) -> u32 {
| ^^
|
note: lint level defined here
--> $DIR/single_use_lifetimes.rs:10:9
|
10 | #![warn(single_use_lifetime)]
| ^^^^^^^^^^^^^^^^^^^

Step 2.

Let's look at a slightly more complex example involving a struct, like the one below.

struct Foo<'a, 'b> {
    f: &'a u32,
    g: &'b u32,
}

fn foo<'x, 'y>(foo: Foo<'x, 'y>) -> &'x u32 {
    foo.f
}

This will produce a lint against 'y and suggest replacing it with _.

fn foo<'x, 'y>(foo: Foo<'x, 'y>) -> &'x u32 {
           ^^ lifetime name `'y` only used once. Use `_` instead.
    foo.f
}
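
In real Rust syntax, the replacement for an unneeded named lifetime is the anonymous lifetime '_. As a rough sketch (assuming a compiler that accepts '_ in this position), the elided version of the example might look like this:

struct Foo<'a, 'b> {
    f: &'a u32,
    g: &'b u32,
}

// 'y was used only once, so it can be replaced with the anonymous lifetime '_.
fn foo<'x>(foo: Foo<'x, '_>) -> &'x u32 {
    foo.f
}

fn main() { }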

Why?

Firstly, using _ instead of a lifetime name makes it easier to deal with compulsory lifetime declarations in the code. It also means that you don’t need to worry about repeating lifetime names throughout.

Secondly, this change is a part of the RFC on In-band lifetime bindings. The idea is to eliminate the need for separately binding lifetime parameters in fn definitions and impl headers. Let’s take a look at the example below.

fn outer_lifetime<'outer>(arg: &'outer &Foo) -> &'outer Bar

If 'outer is the only lifetime in use here, you might as well do this.

fn outer_lifetime(arg: &'outer &Foo) -> &'outer Bar

Quoting Niko Matsakis from the PR description on the GitHub repo:

An explicit name like 'a should only be used (at least in a function or impl) to link together two things. Otherwise, you should just use '_ to indicate that the lifetime is not linked to anything.

The next article will explain the code in detail. Till then, adiós!

Gauri P Kholkar | Stories by GeekyTwoShoes11 on Medium | 2017-12-11 17:37:40

Some Constraints and Trade-offs In The Design of Network Communications: A Summary

This article distills the content presented in the paper “Some Constraints and Trade-offs In The Design of Network Communications” published in 1975 by E. A. Akkoyunlu et al.

The paper focuses on the inclusion of Interprocess Communication (IPC) primitives and the consequences of doing so. In particular, it explores the time-out and the insertion property features, described in detail below, in the context of distributed systems of sequential processes without system buffering and interrupts.

It also touches upon the Two Generals Problem, which states that it’s impossible for two processes to agree on a decision over an unreliable network.

Introduction:

The design of an Interprocess Communication Mechanism (IPCM) can be described by stating the behavior of the system and the services it is required to provide. The features to be included in the IPCM are critical, as they might be interdependent, so the design process should begin with a detailed specification. This requires a thorough understanding of the consequences of each decision.

The major aim of the paper is to point out the interdependence of the features to be incorporated in the system.

The paper states that at times the incompatibility between features is visible from the start. Yet, sometimes two features which seem completely unrelated end up affecting each other significantly. If the trade-offs involved aren’t explored at the beginning, it might not be possible to include desirable features. Trying to accommodate conflicting features results in messy code at the cost of elegance.

Intermediate Processes:

Let’s suppose a system doesn’t allow indirect communication between processes that cannot establish a connection. The users just care about the logical sender and receiver of the messages: they don’t care what path the messages take or how many processes they travel through to reach their final destination. In such a situation, intermediate processes come to our rescue. They’re not a part of the IPCM but are inserted between two processes that can’t communicate directly through a directory or broker process when the connection is set up. They’re the only ones aware of the indirect nature of communication between the processes.

Centralized vs. Distributed Systems:

Centralized Communication Facility

  1. Has a single agent which is able to maintain all state information related to the communication happening in the system
  2. The agent can also change the state of the system in a well-defined manner

For example, if we consider the IPCM to be the centralized agent, it’ll be responsible for matching the SEND and RECEIVE requests of two processes, transferring data between their buffers and relaying appropriate status to both.

Distributed Communication Facility

  1. No single agent has the complete state information at any time
  2. The IPCM is made up of several individual components which coordinate with each other, exchanging and working with the parts of the state information they possess.
  3. A global change can take a considerable amount of time
  4. If one of the components crashes, the activity of the other components is still of interest to us

Case 1:

In Figure 1, P1 and P2 are the two communicating processes on different machines over a network with their own IPCMs and P is the interface which enables this, with parts that lie on both machines. P handles the details of the network lines.

If one machine or a communication link crashes, we want the surviving IPCMs to continue their operation. At least one component should detect the failure and be able to communicate it. (In the case of a communication link failure, both ends must know.)

Case 2:

Distributed communication can also happen on a single machine, given that there are one or more intermediate processes taking part in the system. In that case, P, P1 and P2 are processes on the same system with identical IPCMs. P is an intermediate process which facilitates the communication between P1 and P2.

Transactions between P1 and P2 consist of two steps: P1 to P and P to P2. Normally, the status returned to P1 would reflect only the result of the P1-to-P transfer, but P1 is interested in the status of the overall transaction from P1 to P2.

One way to deal with this is a delayed status return. The status isn’t sent to the sender immediately after the transaction occurs but only when the sender issues a SEND STATUS primitive. In the example above, after receiving the message from P1, P further sends it to P2, doesn’t send any status to P1 and waits to receive a status from P2. When it receives the appropriate status from P2, it relays it to P1 using the SEND STATUS primitive.
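
As a rough illustration of the delayed status return idea (this sketch is mine, not the paper's: the process names, the Status type, and the use of channels are just stand-ins for the IPCM primitives), an intermediate process can withhold the sender's status until the final destination reports back:

use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

// Hypothetical status values relayed between IPCMs.
#[derive(Debug)]
enum Status {
    Delivered,
    Failed,
}

// The intermediate process P: forward P1's message to P2, withhold the status,
// and only relay a status back to P1 once P2's side reports the outcome
// (the equivalent of the SEND STATUS primitive).
fn intermediate(
    from_p1: Receiver<String>,
    to_p2: Sender<String>,
    status_from_p2: Receiver<Status>,
    status_to_p1: Sender<Status>,
) {
    for msg in from_p1 {
        if to_p2.send(msg).is_err() {
            let _ = status_to_p1.send(Status::Failed);
            continue;
        }
        // Delayed status return: block until P2's side reports back.
        let outcome = status_from_p2.recv().unwrap_or(Status::Failed);
        let _ = status_to_p1.send(outcome);
    }
}

fn main() {
    let (p1_tx, p_rx) = channel();               // P1 -> P (data)
    let (p_tx, p2_rx) = channel();               // P  -> P2 (data)
    let (p2_status_tx, p_status_rx) = channel(); // P2 -> P (status)
    let (p_status_tx, p1_status_rx) = channel(); // P  -> P1 (status)

    thread::spawn(move || intermediate(p_rx, p_tx, p_status_rx, p_status_tx));

    // P2: receive the message and report its status back to P.
    thread::spawn(move || {
        for _msg in p2_rx {
            let _ = p2_status_tx.send(Status::Delivered);
        }
    });

    // P1: send a message; the status only arrives once P2 has reported back.
    p1_tx.send("hello".to_string()).unwrap();
    println!("P1 got status: {:?}", p1_status_rx.recv().unwrap());
}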

Special Cases of Distributed Facility

This section starts out by stating some facts and reasoning around them.

FACT 0: A perfectly reliable distributed system can be made to behave as a centralized system.

Theoretically, this is possible if:

  1. The state of different components of the system is known at any given time
  2. After every transaction, the status is relayed properly between the processes through their IPCMs using reliable communication.

However, this isn’t possible in practice because we don’t have a perfectly reliable network. Hence, the more realistic version of the above fact is:

FACT I: A distributed IPCM can be made to simulate a centralized system provided that:
1. The overall system remains connected at all times, and
2. When a communication link fails, the component IPCMs that are connected to it know about it, and
3. The mean time between two consecutive failures is large compared to the mean transaction time across the network.

The paper states that if the above conditions are met, we can establish communication links that are reliable enough to simulate a centralized system because:

  1. There is always a path from the sender to the receiver
  2. Because link failures are detected, only one copy of an undelivered message is retained by the system after a failure. Hence a message cannot be lost while undelivered, and it is removed from the system once delivered.
  3. A routing strategy and a bound on the failure rate ensures that a message moving around in a subset of nodes will eventually get out in finite time if the target node isn’t present in the subset.

The cases described above are special cases because they make a lot of assumptions, use inefficient algorithms and don’t take into account network partitions leading to disconnected components.

Status in Distributed Systems

Complete Status

A complete status is one that relays the final outcome of the message, such as whether it reached its destination.

FACT 2: In an arbitrary distributed facility, it is impossible to provide complete status.

Case 1:

Assume that a system is partitioned into two disjoint networks, leaving the IPCMs disconnected. Now, if IPCM1 was awaiting a status from IPCM2, there is no way to get it and relay the result to P1.

Case 2:

Consider figure 2: if there isn’t a reliable failure detection mechanism present in the system and IPCM2 sends a status message to IPCM1, it can never be sure whether the message arrived without an acknowledgement, and acknowledging every acknowledgement leads to an infinite exchange of messages.

Time-outs

Time-outs are required because the system has finite resources and can’t afford to be deadlocked forever. The paper states that:

FACT 3: In a distributed system with timeouts, it is impossible to provide complete status (even if the system is absolutely reliable).

In figure 3, P1 is trying to send P2 a message through a chain of IPCMs.

Suppose IPCM1 takes data from P1, but P1’s request times out before IPCM1 hears about the status of the transaction. IPCM1 now has no knowledge of the final outcome, i.e. whether the data was successfully received by P2. Whatever status it returns to P1 may prove to be incorrect. Hence, it’s impossible to provide complete status in a distributed facility with time-outs.

Insertion Property

An IPCM has insertion property if we insert an intermediate process P between two processes P1 and P2 that wish to communicate such that:

  1. P is invisible to both P1 and P2
  2. The status relayed to P1 and P2 is the same they’d get if directly connected

FACT 4: In a distributed system with timeouts, the insertion property can be possessed only if the IPCM withholds some status information that is known to it.

Delayed status is required to fulfill the insertion property. Consider a message sent from P1 to P2 through P. What happens if P receives P1’s message and goes into the await-status state, but P1’s request times out before P can learn about the status?

We can’t tell P1 the final outcome of the exchange, as that isn’t available yet. We also can’t tell P1 that the message is in the await-status state, because that would amount to saying the message was received by someone while P2 may never receive the data; that situation cannot arise if P1 and P2 are directly connected, and reporting it would therefore violate the insertion property.

The solution to this is to provide an ambiguous status to P1, one that could equally well have arisen if the two processes were connected directly.

Thus, a deliberate suppression of what happened is introduced by providing the same status to cover a time-out which occurs while awaiting status and, say, a transmission error.

Logical and Physical Messages

The basic function of an IPCM is the transfer and synchronization of data between two processes. The physical message originally sent by the sender process as part of a single operation may be divided into smaller messages, known as logical messages, for ease of transfer.

Buffer Size Considerations

As depicted in figure 5, if a buffer mismatch arises, we can take the following approaches to fix it:

  1. Define a system-wide buffer size. This is extremely restrictive, especially within a network of heterogeneous systems.
  2. Satisfy the request with the smaller buffer size and inform both of the processes involved about what happened. This approach requires that the processes be aware of the low-level details of the communication.
  3. Allow partial transfers. In this approach, only the process that issued the smaller request (50 words) is woken up. All other processes remain asleep awaiting further transfers. If the receiver’s buffer isn’t full, an EOM (End Of Message) indicator is required to wake it up.

Partial Transfers and Well-Known Ports

In figure 6, a service process using a well-known port is accepting requests from several user processes, P1…Pn. If P1 sends a message to the service process that isn’t complete and doesn’t fill its buffer, we need to consider the following situations:

  1. The well-known port is reserved for P1. No other process can communicate with the service process using it until P1 is done.
  2. If the service process times out while P1 is preparing to send the second and final part of the message, the first part is discarded and there is no way to inform P1 that it has been ignored, since P1 isn’t listening for incoming messages from the service process.

Since none of these problems arise without partial transfers, one solution is to ban them altogether. For example:

This is the approach taken in the ARPANET, where communication to well-known ports is restricted to short, complete messages, which are used to set up a separate connection for subsequent communication.

Buffer Processes

This solution is modeled around the creation of dynamic processes.

Whenever P1 wishes to transfer data to the service process, a new process S1 is created. S1 receives messages from P1 until the logical message is complete, sleeping as and when required. It then sends the complete physical message to the service process with the EOM flag set. Thus no partial transfers happen between S1 and the service process; they’re all filtered out before that point.

However, this kind of solution isn’t possible with well-known ports: S1 is inserted between P1 and the service process when the connection is initialized, and in the case of well-known ports, no such initialization takes place.

In discussing status returned to the users, we have indicated how the presence of certain other features limits the information that can be provided.
In fact, we have shown situations in which uncertain status had to be returned, providing almost no information as to the outcome of the transaction.

Even though the inclusion of the insertion property complicates things, it is beneficial to use a weaker version of it.

Finally, we list a set of features which may be combined in a working IPCM:
(1) Time-outs
(2) Weak insertion property and partial transfer
(3) Buffer processes to allow partial transfers
(4) Well-known ports — with appropriate methods to deal with partial transfers to them.


Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-10 23:28:04

Let’s try to break down the paper “Building On Quicksand” published by Pat Helland and David Campbell in 2009. All pull quotes are from the paper.

The paper focuses on the design of large, fault-tolerant, replicated distributed systems, and discusses how such a design evolves as requirements change over time. It starts off by stating that “Reliable systems have always been built out of unreliable components”.

As the granularity of the unreliable component grows (from a mirrored disk to a system to a data center), the latency to communicate with a backup becomes unpalatable. This leads to a more relaxed model for fault tolerance. The primary system will acknowledge the work request and its actions without waiting to ensure that the backup is notified of the work. This improves the responsiveness of the system because the user is not delayed behind a slow interaction with the backup.

Fault-tolerant systems are made of many components, and their goal is to keep functioning when one of those components fails. We don’t consider Byzantine failures in this discussion; instead we assume the fail-fast model, where a component either works correctly or fails outright.

The paper goes on to compare two versions of the Tandem NonStop system: one that used synchronous checkpointing and one that used asynchronous checkpointing. Refer to section 3 of the paper for all the details. I’d like to touch upon the difference between the two checkpointing strategies.

  • Synchronous checkpointing: with every write to the primary, the corresponding state is sent to the backup, and only after the backup acknowledges the write does the primary send a response to the client that issued it. This ensures that when the primary fails, the backup can take over without losing any work.
  • Asynchronous checkpointing: the primary acknowledges and commits the write as soon as it processes it, without waiting for a reply from the backup. This improves latency but poses other challenges, addressed later (a small sketch of the contrast follows this list).
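
Here is a minimal sketch of the contrast (my illustration, not the paper's: the Write type is invented, and a channel stands in for the network link to the backup):

use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

// A hypothetical write whose state must reach the backup.
#[derive(Debug)]
struct Write { key: String, value: String }

// Synchronous checkpointing: ship the state and wait for the backup's ack
// before acknowledging the client, so no acknowledged work can be lost.
fn write_sync(w: Write, to_backup: &Sender<Write>, ack: &Receiver<()>) {
    to_backup.send(w).unwrap();
    ack.recv().unwrap(); // block until the backup has the state
    // ...only now reply to the client
}

// Asynchronous checkpointing: reply to the client as soon as the primary has
// processed the write; the state flows to the backup later. Work acknowledged
// in this window is lost if the primary fails before the backup catches up.
fn write_async(w: Write, to_backup: &Sender<Write>) {
    let _ = to_backup.send(w); // don't wait for the backup
    // ...reply to the client immediately
}

fn main() {
    let (to_backup, backup_inbox) = channel();
    let (ack_tx, ack_rx) = channel();

    // A stand-in backup that stores one checkpoint and acknowledges it.
    thread::spawn(move || {
        let w: Write = backup_inbox.recv().unwrap();
        println!("backup stored {:?}", w);
        ack_tx.send(()).unwrap();
    });

    write_sync(Write { key: "a".into(), value: "1".into() }, &to_backup, &ack_rx);
    write_async(Write { key: "b".into(), value: "2".into() }, &to_backup);
}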

Log Shipping

A classic database system has a process that reads the log and ships it to a backup data-center. The normal implementation of this mechanism commits transactions at the primary system (acknowledging the user’s commit request) and asynchronously ships the log. The backup database replays the log, constantly playing catch-up.

The mechanism described above is termed log shipping. The main problem it poses is that when the primary fails and the backup takes over, some recent transactions might be lost.

This inherently opens up a window in which the work is acknowledged to the client but it has not yet been shipped to the backup. A failure of the primary during this window will lock the work inside the primary for an unknown period of time. The backup will move ahead without knowledge of the locked-up work.

The introduction of asynchrony into the system has an advantage in latency, response time and performance. However, it makes the system more prone to the possibility of losing work when the primary fails. There are two ways to deal with this:

  1. Discard the work locked in the primary when it fails. Whether a system can do that or not depends on the requirements and business rules.
  2. Have a recovery mechanism to sync the primary with the backups when it comes back up and to retry the lost work. This is possible only if the operations can be retried in an idempotent way and out-of-order retries are possible.

The system loses the notion of what the authors call “an authoritative truth”. Nobody knows the accurate state of the system at any given point in time if the work is locked in an unavailable backup or primary.

The authors conclude that business rules in a system with asynchronous checkpointing are probabilistic.

If a primary uses asynchronous checkpointing and applies a business rule on the incoming work, it is necessarily a probabilistic rule. The primary, despite its best intentions, cannot know it will be alive to enforce the business rules.
When the backup system that participates in the enforcement of these business rules is asynchronously tied to the primary, the enforcement of these rules inevitably becomes probabilistic!

The authors state that commutative operations, operations that can be reordered, can be executed independently, as long as the operation preserves business rules. However, this is hard to do with storage systems because the write operation isn’t commutative.

Another consideration is that the work of a single operation should be idempotent: executing the operation any number of times should result in the same state of the system.

To ensure this, applications typically assign a unique number or ID to the work. This is assigned at the ingress to the system (i.e. whichever replica first handles the work). As the work request rattles around the network, it is easy for a replica to detect that it has already seen that operation and, hence, not do the work twice.
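
A minimal sketch of that idea (my illustration; the ID scheme and the "work" are invented):

use std::collections::HashSet;

// A work request tagged with a unique id at the ingress replica.
struct Request { id: u64, payload: String }

// Each replica remembers the ids it has already processed, so a request that
// rattles around the network and arrives twice is only executed once.
struct Replica {
    seen: HashSet<u64>,
    state: Vec<String>,
}

impl Replica {
    fn handle(&mut self, req: Request) {
        if !self.seen.insert(req.id) {
            return; // duplicate: this id was already executed, do nothing
        }
        self.state.push(req.payload); // "execute" the work exactly once
    }
}

fn main() {
    let mut r = Replica { seen: HashSet::new(), state: Vec::new() };
    r.handle(Request { id: 1, payload: "credit $10".into() });
    r.handle(Request { id: 1, payload: "credit $10".into() }); // retried: ignored
    assert_eq!(r.state.len(), 1);
}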

The authors suggest that different operations within a system provide different consistency guarantees. Yet, this depends on the business requirements. Some operations can choose classic consistency over availability and vice versa.

Next, the authors argue that as soon as there is no notion of an authoritative truth in a system, all computing boils down to three things: memories, guesses, and apologies.

  1. Memories: you can only hope that your replica remembers what it has already seen.
  2. Guesses: Due to only partial knowledge being available, the replicas take actions based on local state and may be wrong. “In any system which allows a degradation of the absolute truth, any action is, at best, a guess.” Any action in such a system has a high probability of being successful, but it’s still a guess.
  3. Apologies: Mistakes are inevitable. Hence, every business needs to have an apology mechanism in place either through human intervention or by automating it.

The paper next discusses the topic of eventual consistency. The authors do this by taking the Amazon shopping cart built using Dynamo & a system for clearing checks as examples. A single replica identifies and processes the work coming into these systems. It flows to other replicas as and when connectivity permits. The requests coming into these systems are commutative (reorderable). They can be processed at different replicas in different orders.

Storage systems alone cannot provide the commutativity we need to create robust systems that function with asynchronous checkpointing. We need the business operations to reorder. Amazon’s Dynamo does not do this by itself. The shopping cart application on top of the Dynamo storage system is responsible for the semantics of eventual consistency and commutativity. The authors think it is time for us to move past the examination of eventual consistency in terms of updates and storage systems. The real action comes when examining application based operation semantics.

Next, they discuss two strategies for allocating resources in replicas that might not be able to communicate with each other:

  1. Over-provisioning: the resources are partitioned between replicas. Each has a fixed subset of resources they can allocate. No replica can allocate a resource that’s not actually available.
  2. Over-booking: the resources can be individually allocated without ensuring strict partitioning. This may lead to the replicas allocating a resource that’s not available, promising something they can’t deliver.

The paper also talks about something termed the “seat reservation pattern”. This is a compromise between over-provisioning and over-booking:

Anyone who has purchased tickets online will recognize the “Seat Reservation” pattern where you can identify potential seats and then you have a bounded period of time, (typically minutes), to complete the transaction. If the transaction is not successfully concluded within the time period, the seats are once again marked as “available”.
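
A minimal sketch of the pattern (my illustration, not from the paper; a real ticketing system would obviously be far more involved):

use std::collections::HashMap;
use std::time::{Duration, Instant};

// A seat is either free, held until a deadline, or sold.
enum Seat {
    Free,
    Held { until: Instant },
    Sold,
}

struct Venue {
    seats: HashMap<u32, Seat>,
    hold_for: Duration, // the bounded reservation window, "typically minutes"
}

impl Venue {
    // Identify a potential seat: place a bounded-time hold on it.
    fn reserve(&mut self, seat: u32) -> bool {
        let now = Instant::now();
        let available = match self.seats.get(&seat) {
            Some(Seat::Free) => true,
            Some(Seat::Held { until }) => *until < now, // an expired hold counts as available again
            _ => false,
        };
        if available {
            self.seats.insert(seat, Seat::Held { until: now + self.hold_for });
        }
        available
    }

    // Complete the purchase only if the hold hasn't expired.
    fn purchase(&mut self, seat: u32) -> bool {
        let now = Instant::now();
        let hold_valid = match self.seats.get(&seat) {
            Some(Seat::Held { until }) => *until >= now,
            _ => false,
        };
        if hold_valid {
            self.seats.insert(seat, Seat::Sold);
        }
        hold_valid
    }
}

fn main() {
    let mut venue = Venue {
        seats: HashMap::from([(1, Seat::Free), (2, Seat::Free)]),
        hold_for: Duration::from_secs(300),
    };
    assert!(venue.reserve(1));   // hold placed
    assert!(!venue.reserve(1));  // nobody else can hold it meanwhile
    assert!(venue.purchase(1));  // transaction completed within the window
}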

ACID 2.0

The classic definition of ACID stands for “Atomic, Consistent, Isolated, and Durable”. Its goal is to make the application think that there is a single computer which isn’t doing anything else while the transaction is being processed. The authors talk about a new definition for ACID, which stands for Associative, Commutative, Idempotent, and Distributed.

The goal for ACID2.0 is to succeed if the pieces of the work happen: At least once, anywhere in the system, in any order. This defines a new KIND of consistency. The individual steps happen at one or more system. The application is explicitly tolerant of work happening out of order. It is tolerant of the work happening more than once per machine, too.

Going by the classic definition of ACID, a linear history is a basis for fault tolerance. If we want to achieve the same guarantees in a distributed system, it’ll require concurrency control mechanisms which “tend to be fragile”.

When the application is constrained to the additional requirements of commutativity and associativity, the world gets a LOT easier. No longer must the state be checkpointed across failure units in a synchronous fashion. Instead, it is possible to be very lazy about the sharing of information. This opens up offline, slow links, low-quality datacenters, and more.

In conclusion:

We have attempted to describe the patterns in use by many applications today as they cope with failures in widely distributed systems. It is the reorderability of work and repeatability of work that is essential to allowing successful application execution on top of the chaos of a distributed world in which systems come and go when they feel like it.




Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-08 21:19:06

Hello,

My name is Sandhya Bankar, and I recently finished my internship with The Linux Foundation as an Outreachy intern.

I am an open source enthusiast who is passionate about learning and exploring the Linux kernel. It is a lot of fun to work on.
I was a Linux kernel intern through Outreachy in Round 13, and I have just completed the internship (https://wiki.gnome.org/Outreachy/2016/DecemberMarch). Outreachy is hosted by Software Freedom Conservancy, with special support from Red Hat and the GNOME Foundation, to support women participating in free and open source software, since contributors to free and open source projects have mostly been men.

I was selected for the project "radix tree __alloc_fd" under the guidance of my mentors Matthew Wilcox and Rik Van Riel, and I got amazing support from them for the __alloc_fd project.
More specifically, I worked on a patchset for the IDR (integer ID management), which is a radix-tree-like structure. The work converts the file allocation code to use the IDR: file descriptors are currently allocated using a custom allocator, and the patchset replaces that custom code with an IDR. This replacement results in some memory savings for processes with relatively few open files and improves the performance of workloads with very large numbers of open files. The link to the submitted patch is:


http://marc.info/?a=146149549800002&r=1&w=2


I completed my Master of Engineering in Computer Networking and my Bachelor of Engineering in Electronics Engineering with distinction. I have also done a P.G. Diploma course in embedded systems and VLSI design at CDAC Pune, Maharashtra, India. My main interest is C programming, together with data structures and algorithms. I have extensive experience sending patches through git via mutt.


Other than technical studies I love reading books. I also like to spend time on outdoor games.

Please do visit my LinkedIn Profile,

https://www.linkedin.com/in/sandhya-bankar-78938498/

Sandhya Babanrao Bankar | Kernel Stuff | 2017-12-07 23:24:39

So this is a blog post about some songs that have been stuck in my head lately, that are on my playlist, and that got me thinking about why I enjoy them.
Without further ado, here they are.

1. Both Sides Now - Joni Mitchell


A beautiful musing on love. The experience and wisdom of the years, implied in the song's lyrics, come through in Joni's voice and in the accompanying instrumentation. Lovely for when you just want to reflect.

2. LoveStoned/I Think She Knows - Justin Timberlake


When I was younger, I was all about how much noise a song had. The upbeat tempo that kept you dancing. So the 'I Think She Knows' interlude didn't interest me; it pretty much went over my head. Listening to the song now, I agree with John Mayer; the 45 seconds or so of 'I Think She Knows' really are sonic bliss. At least to me. Your mileage may vary 😎.

3. She Has No Time - Keane

 
A lovely song for when you wonder so much about what could have been that your heart aches and you almost want to curse the day you met that someone special. 

"You think your days are uneventful / And no one ever thinks about you ", "You think your days are ordinary / And no one ever thinks about you / But we're all the same" 

We all feel this way at some point. We think we must be boring, unexciting, uninteresting, somehow inherently below par. Not good enough. The closing line reminds us that before we throw that pity party, it's not just us who feel strangely uninspiring; we all do from time to time. It isn't the reason why someone would leave. 

"Think about the lonely people / Then think about the day she found you / Or lie to yourself / And see it all dissolve around you" 

Remember that there was a 'before', a moment when you'd give anything to feel what you felt, to have that experience of cherishing someone however brief. They also tell you that you have a choice: you can choose to be grateful, to keep the memory of that moment as something wonderful that happened and you are happy that it happened, or you can ruin it with sour grapes, and say to yourself: "It didn't matter much" 

What you had was beautiful, accept it and move on. 

4. Happen - Emeli Sande

 
I remember when I first listened to the song: Emeli's soaring vocals caught me off guard and made me realize that I had missed half the song already. I think of this song as poetry made music, if that makes sense 😉. 

There are a million more dreams being dreamt tonight / But somehow this one feels like it just might / Happen / Happen to me

I won't try to explain. Just listen to the song.  

5. Same Ol' Mistakes (cover, orig. by Tame Impala) - Rihanna

https://www.youtube.com/watch?v=HbfuDghCEKI

I typically try to listen to the original of a song, if I like the cover. I make an exception for Rihanna because she does a good job with this song. Honestly I could unpack this song for days. So below are my favourite (lyrical) bits.

I know you don't think it's right / I know that you think it's fake / Maybe fake's what I like / The point is I have the right

Very true. 'Nuff said :D

Man I know that it's hard to digest / But maybe your story / Ain't so different from the rest / And I know it seems wrong to accept / But you've got your demons / And she's got her regrets
Man I know that it's hard to digest / A realization is as good as a guess / And I know it seems wrong to accept / But you've got your demons / And she's got her regrets..

It is hard to take when you're made to consider that maybe, just maybe, you're not alone in what you feel. That it has been done to death by other people. And the only reason why you think it is special, is because it's your first time or you have too much hope or whatever. Sooner or later, baggage comes into the picture. What you were running from catches up with you. The circle of life, or karma, if you will.

Leni Kadali | Memoirs Of A Secular Nun | 2017-12-07 20:57:35

Making Hamburger menu functional, Attending Wikimania’17, Continuing the work on enhancing usability of the dashboard on mobile devices.

Week 9

Completed refactoring the navigation bar Haml code to React and worked on resolving some unwanted behavior. Added links to the hamburger menu and made the required styling changes.

Active Pull Request:

[WIP] Hamburger menu by ragesoss · Pull Request #1332 · WikiEducationFoundation/WikiEduDashboard

Week 10 & 11 — Wikimania 2017 fever

Attended Wikimania ’17. Made the hamburger menu work on Chrome and Firefox. Had a successful meeting with Jonathan to understand the documentation format for the user testing done during the internship. Traveled and wrote post-Wikimania blog posts.

Link to the blog posts.

Week 12

Made the hamburger menu cross-browser compatible and added styling to the dashboard pages to make them mobile friendly using CSS media queries.

Screen shots of the Mobile Layout

Hamburger menu in action.

Sejal Khatri | Stories by Sejal Khatri on Medium | 2017-12-06 09:32:30

UX folks may be in the best position to identify ethical issues in their companies. Should it be their responsibility?

This is the final piece of the story I’ve been telling. It started with an explanation of some of the problems currently present in the implementation of UX practices. I then described various ethical problems in technology companies today.

I will now explain how UX folks are uniquely situated to notice ethical concerns. I will also explain how, despite their unique perspective, I do not think that UX folks should be the gatekeepers of ethics. Much like UX itself, ethical considerations are too likely to be ignored without buy-in from the top levels of a company.

Ethics and UX

Ethics and user experience are tied together for a few reasons:

  • Folks who are working on the user experience of a piece of software will often have a good view on the ethics of it — if they stop to consider it.
  • UX folks are trained to see the impact of a product on people’s lives. We are a bridge between software and humans, and ethical concerns are also in that space.
  • Like UX, ethics needs buy-in throughout the company. It can otherwise be difficult or impossible to enforce, as ethical considerations can be at odds with short-term company priorities like shareholder profits or introducing convenient (but potentially problematic) features.

Given that UX folks are in a great position to see ethical problems as they come up, it may be tempting to suggest that we should be the ones in charge of ethics. Unfortunately, as I described in an earlier section, many UX folks are already struggling to get buy-in for their UX work. Without buy-in at the top level, we are unlikely to have the power to do anything about it, and may risk our jobs and livelihoods.

This is made worse by the fact that there are a lot of new UX folks in the Boston area. If they are on the younger side of things, they may not realize that they are being asked to do the impossible, or that they can push back. New UXers may also have taken out student loans, whether as an undergraduate student or to enable a career change into UX, thereby effectively becoming indentured servants who can’t even use bankruptcy to escape them.

Even new and career-changer UX folks who have not taken out loans can feel like they can’t afford to annoy the company they’re working for. Given how few entry-level jobs there are — at least in the Boston area — it’s a huge risk for someone new to UX to be taking.

The risk of pointing out ethical problems is even worse for those who are members of an ethnic minority or otherwise in an especially vulnerable position, and who may also be more likely to notice potential problem areas.

Individual UX folks should not be the sole custodians of ethics nor of the commitment to a better user experience. Without buy-in at high levels of the company, neither of these are likely to work out well for anyone.

Who should be in charge of software ethics?

Who, then, should be the custodians of keeping software from causing harm?

The UXPA Professional Organization

The UXPA organization has a code of conduct, which is excellent. Unfortunately, it doesn’t really have much to do with the ethical concerns that have come up lately. At best, we have the lines “UX practitioners shall never knowingly use material that is illegal, immoral, or which may hurt or damage a person or group of people.” and “UX practitioners shall advise clients and employers when a proposed project is not in the client’s best interest and provide a rationale for this advice.” However, these are relevant to the problem at hand only if a UX practitioner can tell that something might cause harm, or if a client’s best interest matches up with the public’s best interest.

The code of conduct in question may not be specific enough, either: the main purpose of such a code of conduct is to offer practitioners a place to refer to when something goes against it. It is not clear that this code offers that opportunity, nor is it really a UX professional’s job to watch for ethics concerns. We may be best positioned, and we may be able to learn what to look for, but ethical concerns are only a part of the many tasks a UX professional may have.

Companies Themselves

A better question might be: how do we encourage companies to adopt and stick to an ethics plan around digital products? Once something like that is in place, it becomes a _lot_ easier for your employees to take that into account. Knowing what to pay attention to, what areas to explore, and taking the time to do so would be a huge improvement.

Maybe instead of asking UX folks to be the custodians of ethics (also here), we can encourage companies to pay attention to this problem. UX folks could certainly work with and guide their companies when those companies are looking to be more ethically conscious.

I’m not at all certain what might get companies to pay attention to ethics, except possibly for things like the current investigation into the effects of Russian interference in our politics. When it’s no longer possible to hide the evil that one’s thoughtlessness — or one’s focus on money over morals — has caused, maybe that will finally get companies to implement and enforce clear, ethical guidelines.

What do you think?

What are your thoughts on how — or even if — ethics should be brought to the table around high tech?

Thank you to Alex Feinman and Emily Lawrence for their feedback on this entry!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-12-06 00:19:29

Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiar with using Github and creating a Python virtual environment to develop in. If you aren't, I recommend you learn with the Django Girls tutorial, which covers that and more.

This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).

The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.

Screenshot of Github, the menu to create a new repository is open and a new repo is being created with the name 'rsip22.github.io'

I recommend that you initialize your repository with a README, with a .gitignore for Python and with a free software license. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.

Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:

$ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git

And change to the new directory:

$ cd YOUR_USERNAME.github.io

Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. To do that, you must create a new branch called "source":

$ git checkout -b source

Create the virtualenv with the Python3 version installed on your system.

On GNU/Linux systems, the command might go as:

$ python3 -m venv venv

or as

$ virtualenv --python=python3.5 venv

And activate it:

$ source venv/bin/activate

Inside the virtualenv, you have to install pelican and its dependencies. You should also install ghp-import (to help us with publishing to Github) and Markdown (for writing your posts using markdown). It goes like this:

(venv)$ pip install pelican markdown ghp-import

Once that is done, you can start creating your blog using pelican-quickstart:

(venv)$ pelican-quickstart

This will prompt us with a series of questions. Before answering them, take a look at my answers below:

> Where do you want to create your new web site? [.] ./
> What will be the title of this web site? Renata's blog
> Who will be the author of this web site? Renata
> What will be the default language of this web site? [pt] en
> Do you want to specify a URL prefix? e.g., http://example.com   (Y/n) n
> Do you want to enable article pagination? (Y/n) y
> How many articles per page do you want? [10] 10
> What is your time zone? [Europe/Paris] America/Sao_Paulo
> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
> Do you want to upload your website using FTP? (y/N) n
> Do you want to upload your website using SSH? (y/N) n
> Do you want to upload your website using Dropbox? (y/N) n
> Do you want to upload your website using S3? (y/N) n
> Do you want to upload your website using Rackspace Cloud Files? (y/N) n
> Do you want to upload your website using GitHub Pages? (y/N) y
> Is this your personal page (username.github.io)? (y/N) y
Done. Your new project is available at /home/username/YOUR_USERNAME.github.io

About the time zone, it should be specified as TZ Time zone (full list here: List of tz database time zones).

Now, go ahead and create your first blog post! You might want to open the project folder on your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata which identifies the Title, Date, Category and more from the post before you start with the content, like this:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

Title: My first post
Date: 2017-11-26 10:01
Modified: 2017-11-27 12:30
Category: misc
Tags: first, misc
Slug: My-first-post
Authors: Your name
Summary: What does your post talk about? Write here.

This is the *first post* from my Pelican blog. **YAY!**

Let's see how it looks?

Go to the terminal, generate the static files and start the server. To do that, use the following command:

(venv)$ make html && make serve

While this command is running, you should be able to visit it on your favorite web browser by typing localhost:8000 on the address bar.

Screenshot of the blog home. It has a header with the title Renata\'s blog, the first post on the left, info about the post on the right, links and social on the bottom.

Pretty neat, right?

Now, what if you want to put an image in a post, how do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find the pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes

STATIC_PATHS = ['images']

Save it. Go to your post and add the image this way:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

![Write here a good description for people who can't see the image]({filename}/images/IMAGE_NAME.jpg)

You can interrupt the server at any time by pressing CTRL+C on the terminal. But you should start it again and check if the image is correct. Can you remember how?

(venv)$ make html && make serve

One last step before your coding is "done": you should make sure anyone can read your posts using ATOM or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes

FEED_ALL_ATOM = 'feeds/all.atom.xml'
FEED_ALL_RSS = 'feeds/all.rss.xml'
AUTHOR_FEED_RSS = 'feeds/%s.rss.xml'
RSS_FEED_SUMMARY_ONLY = False

Save everything so you can send the code to Github. You can do that by adding all files, committing it with a message ('first commit') and using git push. You will be asked for your Github login and password.

$ git add -A && git commit -a -m 'first commit' && git push --all

And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:

$ make github

You will be asked for your Github login and password again. And... voilà! Your new blog should be live on https://YOUR_USERNAME.github.io.

If you had an error in any step of the way, please reread this tutorial, try and see if you can detect in which part the problem happened, because that is the first step to debugging. Sometimes, even something simple like a typo or, with Python, a wrong indentation, can give us trouble. Shout out and ask for help online or in your community.

For tips on how to write your posts using Markdown, you should read the Daring Fireball Markdown guide.

To get other themes, I recommend you visit Pelican Themes.

This post was adapted from Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme. I hope it was somewhat useful for you.

Renata D'Avila | Renata's blog | 2017-12-05 22:30:00

This year WineConf was held in Wroclaw, Poland, October 28-29. I gave a presentation about my work on the AppDB on the second day of the conference.

My AppDB work focused on two broad areas, the admin controls and the test report form. The former is the most important to me, but is the least visible to others, so I enjoyed having the chance to show off some of the improvements I had made to that area.

The bulk of my talk focused on changes to the test report form, including a demonstration of how the form now works and a discussion of my rationale for making the changes. I also talked a little about my future plans for that form, which include making it more responsive using JavaScript.

I ended the talk with a few stats from the new fields that I had added to the test report form:

Staging/Non-staging: June 5, 2017--October 23, 2017

Workarounds: August 16, 2017--October 23, 2017

GPU/Driver: August 25, 2017--October 23, 2017

The changes haven't been live for very long, so the data is limited. Next year, with a bigger sample to look at, I'll be able to do more detailed breakdowns, such as a breakdown of ratings across GPU/drivers. 





Rosanne DiMesio | Notes from an Internship | 2017-12-05 18:55:04

Distributed Computing in a nutshell: How distributed systems work

This post distills the material presented in the paper titled “A Note on Distributed Systems” published in 1994 by Jim Waldo and others.

The paper presents the differences between local and distributed computing in the context of Object Oriented Programming. It explains why treating them the same is incorrect and leads to applications that aren’t robust or reliable.

Introduction

The paper kicks off by stating that the current work in distributed systems is modeled around objects — more specifically, a unified view of objects. Objects are defined by their supported interfaces and the operations they support.

Naturally, this can be extended to imply that objects in the same address space, or in a different address space on the same machine, or on a different machine, all behave in a similar manner. Their location is an implementation detail.

Let’s define the most common terms in this paper:

Local Computing

It deals with programs that are confined to a single address space only.

Distributed Computing

It deals with programs that can make calls to objects in different address spaces either on the same machine or on a different machine.

The Vision of Unified Objects

Implicit in this vision is that the system will be “objects all the way down.” This means that all current invocations, or calls for system services, will eventually be converted into calls that might be made to an object residing on some other machine. There is a single paradigm of object use and communication used no matter what the location of the object might be.

This refers to the assumption that all objects are defined only in terms of their interfaces. Their implementation, which includes the location of the object, is independent of their interfaces and hidden from the programmer.

As far as the programmer is concerned, they write the same type of call for every object, whether local or remote. The system takes care of sending the message by figuring out the underlying mechanisms, which are not visible to the programmer writing the application.

The hard problems in distributed computing are not the problems of how to get things on and off the wire.

The paper goes on to define the toughest challenges of building a distributed system:

  1. Latency
  2. Memory Access
  3. Partial failure and concurrency

Ensuring reasonable performance while dealing with all of the above doesn’t make the life of a distributed systems engineer any easier. And the lack of any central resource or state manager adds to the various challenges. Let’s look at each of these one by one.

Latency

This is the fundamental difference between local and distributed object invocation.

The paper claims that a remote call is four to five orders of magnitude slower than a local call. If the design of a system fails to recognize this fundamental difference, it is bound to suffer from serious performance problems, especially if it relies heavily on remote communication.

You need to have a thorough understanding of the application being designed so you can decide which objects should be kept together and which can be placed remotely.

If the goal is to hide the difference in latency, then we have two options:

  • Rely on the hardware to get faster with time to eliminate the difference in efficiency
  • Develop tools which allow us to visualize communication patterns between different objects and move them around as required. Since location is an implementation detail, this shouldn’t be too hard to achieve

Memory

Another difference that’s very relevant to the design of distributed systems is the pattern of memory access between local and remote objects. A pointer in the local address space isn’t valid in a remote address space.

We’re left with two choices:

  • The developer must be made aware of the difference between the access patterns
  • To unify the differences in access between local and remote access, we need to let the system handle all aspects of access to memory.

There are several ways to do that:

  • Distributed shared memory
  • Using the OOP (Object-oriented programming) paradigm, compose a system entirely of objects — one that deals only with object references.
    The transfer of data between address spaces can be dealt with by marshalling and unmarshalling the data by the layer underneath. This approach, however, makes the use of address-space-relative pointers obsolete.

The danger lies in promoting the myth that “remote access and local access are exactly the same.” We should not reinforce this myth. An underlying mechanism that does not unify all memory accesses while still promoting this myth is both misleading and prone to error.

It’s important for programmers to be made aware of the various differences between accessing local and remote objects. We don’t want them to get bitten by not knowing what’s happening under the covers.

Partial failure & concurrency

Partial failure is a central reality of distributed computing.

The paper argues that both local and distributed systems are subject to failure. But it’s harder to discover what went wrong in the case of distributed systems.

For a local system, either everything is shut down or there is some central authority which can detect what went wrong (the OS, for example).

Yet, in the case of a distributed system, there is no global state or resource manager available to keep track of everything happening in and across the system. So there is no way to inform other components which may be functioning correctly which ones have failed. Components in a distributed system fail independently.

A central problem in distributed computing is insuring that the state of the whole system is consistent after such a failure. This is a problem that simply does not occur in local computing.

For a system to withstand partial failure, it’s important that it deals with indeterminacy, and that the objects react to it in a consistent manner. The interfaces must be able to state the cause of failure, if possible. And then allow the reconstruction of a “reasonable state” in case the cause can’t be determined.

The question is not “can you make remote method invocation look like local method invocation,” but rather “what is the price of making remote method invocation identical to local method invocation?”

Two approaches come to mind:

  1. Treat all interfaces and objects as local. The problem with this approach is that it doesn’t take into account the failure models associated with distributed systems. Therefore, it’s indeterministic by nature.
  2. Treat all interfaces and objects as remote. The flaw with this approach is that it over-complicates local computing. It adds on a ton of work for objects that are never accessed remotely.

A better approach is to accept that there are irreconcilable differences between local and distributed computing, and to be conscious of those differences at all stages of the design and implementation of distributed applications.
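
One way to stay conscious of those differences is to make them visible in the interfaces themselves. The sketch below is my own illustration (the types and names are invented, not from the paper): a local call can simply return a value, while a remote call's signature admits a latency bound and partial failure.

use std::time::Duration;

// Failures that only a remote invocation can surface.
#[derive(Debug)]
enum RemoteError {
    Timeout(Duration),
    ConnectionLost,
    OutcomeUnknown, // the hard case: we can't tell whether the call executed
}

// A local object: an invocation cannot fail independently of the caller.
trait LocalInventory {
    fn count(&self, item: &str) -> u64;
}

// The same logical operation on a remote object: the signature forces the
// caller to deal with latency and partial failure instead of hiding them.
trait RemoteInventory {
    fn count(&self, item: &str, deadline: Duration) -> Result<u64, RemoteError>;
}

// Calling code must acknowledge the difference explicitly.
fn reorder_if_low<R: RemoteInventory>(inventory: &R) {
    match inventory.count("widgets", Duration::from_millis(200)) {
        Ok(n) if n < 10 => println!("reorder widgets (only {} left)", n),
        Ok(_) => {}
        Err(e) => eprintln!("could not check inventory: {:?}", e), // partial failure handled here
    }
}

fn main() {
    // A dummy remote inventory that always times out, to exercise the error path.
    struct FlakyInventory;
    impl RemoteInventory for FlakyInventory {
        fn count(&self, _item: &str, deadline: Duration) -> Result<u64, RemoteError> {
            Err(RemoteError::Timeout(deadline))
        }
    }
    reorder_if_low(&FlakyInventory);
}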




Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-04 17:16:43

This article will distill the contents of the academic paper Viewstamped Replication Revisited by Barbara Liskov and James Cowling. All quotations are taken from that paper.

It presents an updated explanation of Viewstamped Replication, a replication technique that handles failures in which nodes crash. It describes how client requests are handled, how the group reorganizes when a replica fails, and how a failed replica is able to rejoin the group.

Introduction

The Viewstamped Replication protocol, referred to as VR, is used for replicated services that run on many nodes known as replicas. VR uses state machine replication: it maintains state and makes it accessible to the clients consuming that service.

Some features of VR:

  • VR is primarily a replication protocol, but it provides consensus too.
  • VR doesn’t use any disk I/O — it uses replicated state for persistence.
  • VR deals only with crash failures: a node is either functioning or it completely stops.
  • VR works in an asynchronous network like the internet where nothing can be concluded about a message that doesn’t arrive. It may be lost, delivered out of order, or delivered many times.

Replica Groups

VR ensures reliability and availability when no more than a threshold of f replicas are faulty. It does this by using replica groups of size 2f + 1; this is the minimal number of replicas in an asynchronous network under the crash failure model.

We can provide a simple argument for the above statement: in a system where up to f nodes may crash, we need at least a majority of f + 1 non-faulty nodes that can mutually agree in order to keep the system functioning.

A group of f+1 replicas is often known as a quorum. The protocol needs the quorum intersection property to be true to work correctly. This property states that:

The quorum of replicas that processes a particular step of the protocol must have a non-empty intersection with the group of replicas available to handle the next step, since this way we can ensure that at each next step at least one participant knows what happened in the previous step.

Architecture:

VR architecture

The architecture of VR is as follows:

  1. The user code is run on client machines on top of a VR proxy.
  2. The proxy communicates with the replicas to carry out the operations requested by the client. It returns the computed results from the replicas back to the client.
  3. The VR code on the side of the replicas accepts client requests from the proxy, executes the protocol, and executes the request by making an up-call to the service code.
  4. The service code returns the result to the VR code which in turn sends a message to the client proxy that requested the operation.

Overview

The challenge for the replication protocol is to ensure that operations execute in the same order at all replicas in spite of concurrent requests from clients and in spite of failures.

For all replicas to end up in the same state, the above condition must be met.

VR deals with the replicas as follows:

Primary: Decides the order in which the operations will be executed.

Backups: Carry out the operations in the same order as selected by the primary.

What if the primary fails?

  • VR allows different replicas to assume the role of primary over time, so the system can keep going when the current primary fails.
  • The system moves through a series of views. In each view, one replica assumes the role of primary.
  • The other replicas watch the primary. If it appears to be faulty, then they carry out a view-change to select a new primary.

We consider the following three scenarios of the VR protocol:

  • Normal case processing of user requests
  • View changes to select a new primary
  • Recovery of a failed replica so that it can rejoin the group

VR protocol

State of VR at a replica

The state maintained by each replica is presented in the figure above. Some points to note:

  • The identity of the primary isn’t stored but computed using the view number and the configuration.
  • The replica with the smallest IP is replica 1 and so on.

The client side proxy also maintains some state:

  • It records the configuration.
  • It records the current view number to track the primary.
  • It has a client id and an incrementing client request number.

Normal Operation

  • Replicas participate in the processing of client requests only when their status is normal.
  • Every message sent carries the sender's view-number, and replicas process only those messages whose view-number matches their own. If the sender is behind (its view-number is smaller), the receiver drops the message; if the sender is ahead, the receiver performs a state transfer to catch up before processing it.

Normal mode operation

The normal operation of VR can be broken down into the following steps (a sketch of the primary's side in code follows the list):

  1. The client sends a REQUEST message to the primary asking it to perform some operation, passing it the client-id and the request number.
  2. The primary cross-checks the info present in the client table. If the request number is smaller than the one present in the table, it discards it. It re-sends the response if the request was the most recently executed one.
  3. The primary increases the op-number, appends the request to its log, and updates the client table with the new request number. It sends a PREPARE message to the replicas with the current view-number, the operation-number, the client’s message, and the commit-number (the operation number of the most recently committed operation).
  4. The replicas won’t accept a message with an op-number until they have all operations preceding it. They use state transfer to catch up if required. Then they add the operation to their log, update the client table, and send a PREPAREOK message to the primary. This message indicates that the operation, including all the preceding ones, has been prepared successfully.
  5. The primary waits for a response from f replicas before committing the operation. It increments the commit-number. After making sure all operations preceding the current one have been executed, it makes an up-call to the service code to execute the current operation. A REPLY message is sent to the client containing the view-number, request-number, and the result of the up-call.
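
To make the flow concrete, here is a minimal sketch of the primary's side of steps 2 and 3. The types and names are mine, not the paper's, and a real implementation would send typed messages rather than strings (and would re-send the cached reply for a repeated request instead of just dropping it).

import qualified Data.Map as Map

-- The client's request as seen by the primary.
data Request = Request { clientId :: Int, requestNum :: Int, operation :: String }

-- The slice of replica state (from the figure above) that these steps touch.
data Primary = Primary
    { viewNumber   :: Int
    , opNumber     :: Int
    , commitNumber :: Int
    , opLog        :: [Request]        -- newest entry last
    , clientTable  :: Map.Map Int Int  -- client-id -> latest request number seen
    }

-- Steps 2 and 3: drop stale or duplicate requests, otherwise append the request
-- to the log, bump the op-number and the client table, and emit a PREPARE message.
handleRequest :: Primary -> Request -> (Primary, Maybe String)
handleRequest p req
    | requestNum req <= Map.findWithDefault 0 (clientId req) (clientTable p) = (p, Nothing)
    | otherwise =
        let p' = p { opNumber    = opNumber p + 1
                   , opLog       = opLog p ++ [req]
                   , clientTable = Map.insert (clientId req) (requestNum req) (clientTable p)
                   }
            prepare = "PREPARE view=" ++ show (viewNumber p') ++ " op=" ++ show (opNumber p')
                          ++ " commit=" ++ show (commitNumber p')
        in (p', Just prepare)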

The primary usually informs the backups about committed operations by including the commit-number in the next PREPARE message. If there is no new client request to send, it informs them by sending a COMMIT message instead.

To execute a request, a backup has to make sure that the operation is present in its log and that all the previous operations have been executed. Then it executes the said operation, increments its commit-number, and updates the client’s entry in the client-table. But it doesn’t send a reply to the client, as the primary has already done that.

If a client doesn’t receive a timely response to a request, it re-sends the request to all replicas. This way if the group has moved to a later view, its message will reach the new primary. Backups ignore client requests; only the primary processes them.

View change operation

Backups monitor the primary: they expect to hear from it regularly. Normally the primary is sending PREPARE messages, but if it is idle (due to no requests) it sends COMMIT messages instead. If a timeout expires without a communication from the primary, the replicas carry out a view change to switch to a new primary.

There is no leader election in this protocol. The primary is selected in a round-robin fashion. Each member has a unique IP address, and the next primary is the functioning backup replica with the smallest IP. Every member of the group already knows who is expected to be the next primary.

Every executed operation at the replicas must survive the view change in the order specified when it was executed. The up-call is carried out at the primary only after it receives f PREPAREOK messages. Thus the operation has been recorded in the logs of at least f+1 replicas (the old primary and f replicas).

Therefore the view change protocol obtains information from the logs of at least f + 1 replicas. This is sufficient to ensure that all committed operations will be known, since each must be recorded in at least one of these logs; here we are relying on the quorum intersection property. Operations that had not committed might also survive, but this is not a problem: it is beneficial to have as many operations survive as possible.

  1. A replica that notices the need for a view change advances its view-number, sets its status to view-change, and sends a START-VIEW-CHANGE message. A replica identifies the need for a view change based on its own timer, or because it receives a START-VIEW-CHANGE or a DO-VIEW-CHANGE from others with a view-number higher than its own.
  2. When a replica receives f START-VIEW-CHANGE messages for its view-number, it sends a DO-VIEW-CHANGE to the node expected to be the primary. The messages contain the state of the replica: the log, most recent operation-number and commit-number, and the number of the last view in which its status was normal.
  3. The new primary waits to receive f+1 DO-VIEW-CHANGE messages from the replicas (including itself). Then it updates its state to the most recent one based on the info in those messages (see the paper for all the rules; the log-selection rule is sketched after this list). It sets its view-number to the one in the messages and changes its status to normal. It informs all other replicas by sending a STARTVIEW message with the most recent state, including the new log, op-number, and commit-number.
  4. The primary can now accept client requests. It executes any committed operations and sends the replies to clients.
  5. When the replicas receive a STARTVIEW message, they update their state based on the message. They send PREPAREOK messages for all uncommitted operations present in their log after the update, and execute these operations to get in sync with the primary.
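
The "most recent" rule glossed over in step 3 can be sketched like this (field names are mine; I am paraphrasing the paper's selection rule): the new primary takes the log from the message whose sender was most recently in normal status, breaking ties by the larger op-number.

import Data.List (maximumBy)
import Data.Ord (comparing)

-- One DO-VIEW-CHANGE message, reduced to the fields the selection rule needs.
data DoViewChange = DoViewChange
    { lastNormalView :: Int       -- last view in which the sender's status was normal
    , opNumber       :: Int
    , theLog         :: [String]
    }

pickNewLog :: [DoViewChange] -> [String]
pickNewLog = theLog . maximumBy (comparing (\m -> (lastNormalView m, opNumber m)))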

To make the view change operation more efficient, the paper describes the following approach:

The protocol described has a small number of steps, but big messages. We can make these messages smaller, but if we do, there is always a chance that more messages will be required. A reasonable way to get good behavior most of the time is for replicas to include a suffix of their log in their DO-VIEW-CHANGE messages. The amount sent can be small since the most likely case is that the new primary is up to date. Therefore sending the latest log entry, or perhaps the latest two entries, should be sufficient. Occasionally, this information won’t be enough; in this case the primary can ask for more information, and it might even need to first use application state to bring itself up to date.

Recovery

When a replica recovers after a crash, it cannot participate in request processing and view changes until it has a state at least as recent as the one it had when it failed. If it could participate sooner than this, the system could fail.

The replica should not “forget” anything it has already done. One way to ensure this is to persist the state on disk — but this will slow down the whole system. This isn’t necessary in VR because the state is persisted at other replicas. It can be obtained by using a recovery protocol provided that the replicas are failure independent.

When a node comes back up after a crash it sets its status to recovering and carries out the recovery protocol. While a replica’s status is recovering it does not participate in either the request processing protocol or the view change protocol.

The recovery protocol is as follows:

  1. The recovering replica sends a RECOVERY message to all other replicas with a nonce.
  2. Only if the replica’s status is normal does it reply to the recovering replica with a RECOVERY-RESPONSE message. This message contains its view number and the nonce it received. If it’s the primary, it also sends its log, op-number, and commit-number.
  3. When the recovering replica has received f+1 RECOVERY-RESPONSE messages, including one from the primary, it updates its state and changes its status to normal (this check is sketched below).

The protocol uses the nonce to ensure that the recovering replica accepts only RECOVERY-RESPONSE messages that belong to this recovery and not to an earlier one.
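
A compact sketch of the check in step 3 (types and names are mine):

-- One RECOVERY-RESPONSE, reduced to the fields the check needs.
data RecoveryResponse = RecoveryResponse
    { respNonce   :: Int
    , fromPrimary :: Bool   -- the primary additionally sends its log, op-number and commit-number
    }

-- The recovering replica may finish once it has f+1 responses carrying its nonce,
-- at least one of which comes from the primary (of the latest view it heard about,
-- a detail omitted here for brevity).
canFinishRecovery :: Int -> Int -> [RecoveryResponse] -> Bool
canFinishRecovery f nonce resps = length valid >= f + 1 && any fromPrimary valid
  where
    valid = filter ((== nonce) . respNonce) resps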

Reconfiguration

Reconfiguration deals with epochs. An epoch represents the group of replicas processing client requests. A reconfiguration can add or remove replicas and can also change the failure threshold f; each such change moves the system into a new epoch, which it tracks through the epoch-number.

Another status, namely transitioning, is used to signify that a system is moving between epochs.

The approach to handling reconfiguration is as follows. A reconfiguration is triggered by a special client request. This request is run through the normal case protocol by the old group. When the request commits, the system moves to a new epoch, in which responsibility for processing client requests shifts to the new group. However, the new group cannot process client requests until its replicas are up to date: the new replicas must know all operations that committed in the previous epoch. To get up to date they transfer state from the old replicas, which do not shut down until the state transfer is complete.

The VR sub-protocols need to be modified to deal with epochs. A replica doesn't accept messages with an epoch-number older than the one it knows; instead, it informs the sender about the new epoch.

During a view change, the new primary cannot accept client requests while the system is transitioning between epochs; it detects this by checking whether the topmost request in its log is a RECONFIGURATION request. Similarly, a replica recovering in an older epoch is informed of the new epoch: it rejoins the group if it is a member of the new epoch, and shuts down otherwise.

The issue that comes to mind is that the client requests can’t be served while the system is moving to a new epoch.

The old group stops accepting client requests the moment the primary of the old group receives the RECONFIGURATION request; the new group can start processing client requests only when at least f + 1 new replicas have completed state transfer.

This can be dealt with by “warming up” the nodes before reconfiguration happens. The nodes can be brought up-to-date using state transfer while the old group continues to reply to client requests. This reduces the delay caused during reconfiguration.

This paper has presented an improved version of Viewstamped Replication, a protocol used to build replicated systems that are able to tolerate crash failures. The protocol does not require any disk writes as client requests are processed or even during view changes, yet it allows nodes to recover from failures and rejoin the group.

The paper also presents a protocol to allow for reconfigurations that change the members of the replica group, and even the failure threshold. A reconfiguration technique is necessary for the protocol to be deployed in practice since the systems of interest are typically long lived.




Want to learn how Viewstamped Replication works? Read this summary. was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-04 17:16:36

This article presents a summary of the paper “Harvest, Yield, and Scalable Tolerant Systems” published by Eric Brewer & Amando Fox in 1999. All unattributed quotes are from this paper.

The paper deals with the trade-offs between consistency and availability (CAP) for large systems. It's very easy to point to CAP and assert that no system can have both consistency and availability.

But, there is a catch. CAP has been misunderstood in a variety of ways. As Coda Hale explains in his excellent blog post “You Can’t Sacrifice Partition Tolerance”:

Of the CAP theorem’s Consistency, Availability, and Partition Tolerance, Partition Tolerance is mandatory in distributed systems. You cannot not choose it. Instead of CAP, you should think about your availability in terms of yield (percent of requests answered successfully) and harvest (percent of required data actually included in the responses) and which of these two your system will sacrifice when failures happen.

The paper focuses on increasing the availability of large-scale systems through fault tolerance, containment, and isolation:

We assume that clients make queries to servers, in which case there are at least two metrics for correct behavior: yield, which is the probability of completing a request, and harvest, which measures the fraction of the data reflected in the response, i.e. the completeness of the answer to the query.

The two metrics, harvest and yield, can be summarized as follows (a tiny code sketch follows the list):

  • Harvest: data in response/total data
    For example: If one of the nodes is down in a 100 node cluster, the harvest is 99% for the duration of the fault.
  • Yield: requests completed with success/total number of requests
    Note: Yield is different from uptime. Yield deals with the number of requests, not only the time the system wasn’t able to respond to requests.
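
In code, the two ratios are trivial (a sketch, with the counts as plain numbers):

harvest :: Double -> Double -> Double
harvest dataInResponse totalData = dataInResponse / totalData

yield :: Double -> Double -> Double
yield successfulRequests totalRequests = successfulRequests / totalRequests

-- e.g. with 1 of 100 nodes down and the data spread evenly, harvest 99 100 == 0.99,
-- while yield depends on how many requests can still be answered successfully.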

The paper argues that there are certain systems which require perfect responses to queries every single time. Also, there are systems that can tolerate imperfect answers once in a while.

To increase the overall availability of our systems, we need to carefully think through the required consistency and availability guarantees it needs to provide.

Trading Harvest for Yield — Probabilistic Availability

Nearly all systems are probabilistic whether they realize it or not. In particular, any system that is 100% available under single faults is probabilistically available overall (since there is a non-zero probability of multiple failures)

The paper talks about understanding the probabilistic nature of availability. This helps in understanding and limiting the impact of faults by making decisions about what needs to be available and what kind of faults the system can deal with.

They outline the linear degradation of harvest in the case of multiple node faults: harvest is directly proportional to the number of nodes that are functioning correctly, so it degrades linearly as nodes fail.

Two strategies are suggested for increasing the yield:

  1. Random distribution of data on the nodes
    If one of the nodes goes down, the average-case and worst-case fault behavior doesn't change. If the distribution isn't random, however, then the impact of a fault depends on the kind of data stored on the failed node.
    For example, if the one node that stores information about users' account balances goes down, the entire banking system stops working.
  2. Replicating the most important data
    This reduces the impact in case one of the nodes containing a subset of high-priority data goes down.
    It also improves harvest.

Another notable observation made in the paper is that while it is possible to replicate all your data, doing so doesn't do much to improve harvest or yield, yet it increases the cost of operation substantially. This is because the internet is built on best-effort protocols, which can never guarantee 100% harvest or yield.

Application Decomposition and Orthogonal Mechanisms

The second strategy focuses on the benefits of orthogonal system design.

It starts out by stating that large systems are composed of subsystems, some of which cannot individually tolerate failures, but which fail in a way that allows the entire system to continue functioning with some impact on utility.

The actual benefit is the ability to provision each subsystem’s state management separately, providing strong consistency or persistent state only for the subsystems that need it, not for the entire application. The savings can be significant if only a few small subsystems require the extra complexity.

The paper states that orthogonal components are completely independent of each other: they have no runtime interface to other components, apart from a possible configuration interface. This allows each individual component to fail independently and minimizes its impact on the overall system.

Composition of orthogonal subsystems shifts the burden of checking for possibly harmful interactions from runtime to compile time, and deployment of orthogonal guard mechanisms improves robustness for the runtime interactions that do occur, by providing improved fault containment.

The goal of this paper was to motivate research in the field of designing fault-tolerant and highly available large scale systems.

It also encourages designers to think carefully about the consistency and availability guarantees their application needs to provide, as well as the trade-offs it can make between harvest and yield.




Harvest, Yield, and Scalable Tolerant Systems: A Summary was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-04 17:13:54

This post summarizes the Raft consensus algorithm presented in the paper In Search of An Understandable Consensus Algorithm by Diego Ongaro and John Ousterhout. All pull quotes are taken from that paper.


Raft:

Raft is a distributed consensus algorithm, designed to be easily understood. It solves the problem of getting multiple servers to agree on a shared state even in the face of failures. The shared state is usually a data structure supported by a replicated log. The system should remain fully operational as long as a majority of the servers are up.

Raft works by electing a leader in the cluster. The leader is responsible for accepting client requests and managing the replication of the log to other servers. The data flows only in one direction: from leader to other servers.

Raft decomposes consensus into three sub-problems:

  • Leader Election: A new leader needs to be elected in case of the failure of an existing one.
  • Log replication: The leader needs to keep the logs of all servers in sync with its own through replication.
  • Safety: If one of the servers has committed a log entry at a particular index, no other server can apply a different log entry for that index.

Raft ensures these properties are true at all times.

Basics:

Each server exists in one of the three states: leader, follower, or candidate.

State changes of servers

In normal operation there is exactly one leader and all of the other servers are followers. Followers are passive: they issue no requests on their own but simply respond to requests from leaders and candidates. The leader handles all client requests (if a client contacts a follower, the follower redirects it to the leader). The third state, candidate, is used to elect a new leader.

Raft divides time into terms of arbitrary length, each beginning with an election. If a candidate wins the election, it remains the leader for the rest of the term. If the vote is split, then that term ends without a leader.

The term number increases monotonically. Each server stores the current term number which is also exchanged in every communication.

.. if one server’s current term is smaller than the other’s, then it updates its current term to the larger value. If a candidate or leader discovers that its term is out of date, it immediately reverts to follower state. If a server receives a request with a stale term number, it rejects the request.

Raft makes use of two remote procedure calls (RPCs) to carry out its basic operation.

  • RequestVotes is used by candidates during elections
  • AppendEntries is used by leaders for replicating log entries and also as a heartbeat (a signal to check if a server is up or not — it doesn’t contain any log entries)

Leader election

The leader periodically sends a heartbeat to its followers to maintain authority. A leader election is triggered when a follower times out after waiting for a heartbeat from the leader. This follower transitions to the candidate state and increments its term number. After voting for itself, it issues RequestVotes RPC in parallel to others in the cluster. Three outcomes are possible:

  1. The candidate receives votes from the majority of the servers and becomes the leader. It then sends a heartbeat message to others in the cluster to establish authority.
  2. If, while waiting for votes, a candidate receives an AppendEntries RPC from another server claiming to be leader, it checks the term number. If that term is at least as large as its own, it accepts the server as the leader and returns to the follower state. If the term is smaller, it rejects the RPC and remains a candidate (this rule is sketched in code after the list).
  3. The candidate neither wins nor loses. If more than one server becomes a candidate at the same time, the vote can be split with no clear majority. In this case a new election begins after one of the candidates times out.

Raft uses randomized election timeouts to ensure that split votes are rare and that they are resolved quickly. To prevent split votes in the first place, election timeouts are chosen randomly from a fixed interval (e.g., 150–300ms). This spreads out the servers so that in most cases only a single server will time out; it wins the election and sends heartbeats before any other servers time out. The same mechanism is used to handle split votes. Each candidate restarts its randomized election timeout at the start of an election, and it waits for that timeout to elapse before starting the next election; this reduces the likelihood of another split vote in the new election.
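
The term check in outcome 2 can be sketched as a tiny function (names are mine; the Bool says whether the RPC is accepted):

data Role = Follower | Candidate | Leader deriving (Show, Eq)

-- A candidate that receives an AppendEntries from a server claiming to be leader
-- steps down only if that server's term is at least as large as its own;
-- otherwise it rejects the RPC and stays a candidate.
candidateOnAppendEntries :: Int -> Int -> (Role, Bool)
candidateOnAppendEntries myTerm leaderTerm
    | leaderTerm >= myTerm = (Follower, True)
    | otherwise            = (Candidate, False)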

Log Replication:

The client requests are assumed to be write-only for now. Each request consists of a command to be executed ideally by the replicated state machines of all the servers. When a leader gets a client request, it adds it to its own log as a new entry. Each entry in a log:

  • Contains the client specified command
  • Has an index to identify the position of entry in the log (the index starts from 1)
  • Has a term number to logically identify when the entry was written

It needs to replicate the entry to all the follower nodes in order to keep the logs consistent. The leader issues AppendEntries RPCs to all other servers in parallel. The leader retries this until all followers safely replicate the new entry.

When the entry is replicated to a majority of servers by the leader that created it, it is considered committed. All the previous entries, including those created by earlier leaders, are also considered committed. The leader executes the entry once it is committed and returns the result to the client.
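
The majority test itself is tiny (a sketch; the leader counts itself among the servers that have stored the entry):

isCommitted :: Int -> Int -> Bool
isCommitted clusterSize storedOn = 2 * storedOn > clusterSize   -- strict majority of the cluster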

The leader keeps track of the highest index it knows to be committed and includes it in the AppendEntries RPCs it sends to its followers. Once a follower learns that an entry has been committed, it applies the entry to its state machine in log order.

Raft maintains the following properties, which together constitute the Log Matching Property:

  • If two entries in different logs have the same index and term, then they store the same command.
  • If two entries in different logs have the same index and term, then the logs are identical in all preceding entries.

When sending an AppendEntries RPC, the leader includes the term number and index of the entry that immediately precedes the new entry. If the follower cannot find a match for this entry in its own log, it rejects the request to append the new entry.

This consistency check lets the leader conclude that whenever AppendEntries returns successfully from a follower, the follower's log matches the leader's own up to and including the entries covered by that RPC.
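
A sketch of that consistency check (types are mine; indexes are 1-based, as in the paper):

data LogEntry = LogEntry { entryTerm :: Int, entryCommand :: String }

-- The follower accepts the new entries only if its own log has an entry at
-- prevLogIndex whose term is prevLogTerm.
passesConsistencyCheck :: [LogEntry] -> Int -> Int -> Bool
passesConsistencyCheck entries prevLogIndex prevLogTerm
    | prevLogIndex == 0             = True    -- empty prefix always matches
    | prevLogIndex > length entries = False   -- follower is missing entries
    | otherwise = entryTerm (entries !! (prevLogIndex - 1)) == prevLogTerm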

But the logs of leaders and followers may become inconsistent in the face of leader crashes.

In Raft, the leader handles inconsistencies by forcing the followers’ logs to duplicate its own. This means that conflicting entries in follower logs will be overwritten with entries from the leader’s log.

The leader tries to find the last index where its log matches that of the follower, deletes extra entries if any, and adds the new ones.

The leader maintains a nextIndex for each follower, which is the index of the next log entry the leader will send to that follower. When a leader first comes to power, it initializes all nextIndex values to the index just after the last one in its log.

Whenever AppendEntries returns with a failure for a follower, the leader decrements that follower's nextIndex and issues another AppendEntries RPC. Eventually, nextIndex reaches a value where the two logs converge. AppendEntries succeeds at that point, and the follower can remove extraneous entries (if any) and add new ones from the leader's log (if any). Hence, a successful AppendEntries from a follower guarantees that the follower's log is consistent with the leader's up to that point.

With this mechanism, a leader does not need to take any special actions to restore log consistency when it comes to power. It just begins normal operation, and the logs automatically converge in response to failures of the Append-Entries consistency check. A leader never overwrites or deletes entries in its own log.
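
A rough sketch of that back-off (the function name is mine): on a failed AppendEntries the leader steps nextIndex back by one and retries; on success it advances nextIndex past the entries it just sent.

updateNextIndex :: Int -> Bool -> Int -> Int
updateNextIndex nextIndex succeeded entriesSent
    | succeeded = nextIndex + entriesSent
    | otherwise = max 1 (nextIndex - 1)   -- probe an earlier entry on the next try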

Safety:

Raft makes sure that the leader for a term has committed entries from all previous terms in its log. This is needed to ensure that all logs are consistent and the state machines execute the same set of commands.

During a leader election, the RequestVote RPC includes information about the candidate's log. If the voter finds that its own log is more up-to-date than the candidate's, it doesn't vote for it.

Raft determines which of two logs is more up-to-date by comparing the index and term of the last entries in the logs. If the logs have last entries with different terms, then the log with the later term is more up-to-date. If the logs end with the same term, then whichever log is longer is more up-to-date.
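
That comparison is simple enough to write down directly (a sketch over the (term, index) of each log's last entry):

moreUpToDate :: (Int, Int) -> (Int, Int) -> Bool
moreUpToDate (lastTermA, lastIndexA) (lastTermB, lastIndexB)
    | lastTermA /= lastTermB = lastTermA > lastTermB    -- later term wins
    | otherwise              = lastIndexA > lastIndexB  -- equal terms: longer log wins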

Cluster membership:

For the configuration change mechanism to be safe, there must be no point during the transition where it is possible for two leaders to be elected for the same term. Unfortunately, any approach where servers switch directly from the old configuration to the new configuration is unsafe.

Raft uses a two-phase approach for altering cluster membership. First, it switches to an intermediate configuration called joint consensus. Then, once that is committed, it switches over to the new configuration.

The joint consensus allows individual servers to transition between configurations at different times without compromising safety. Furthermore, joint consensus allows the cluster to continue servicing client requests throughout the configuration change.

Joint consensus combines the new and old configurations as follows (the agreement rule is sketched after this list):

  • Log entries are replicated to all servers in both the configurations
  • Any server from old or new can become the leader
  • Agreement requires separate majorities from both old and new configurations
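
The third rule can be sketched as follows (a minimal illustration with servers named by strings; the names are mine):

-- True if the servers in acked form a strict majority of the given configuration.
isMajorityOf :: [String] -> [String] -> Bool
isMajorityOf config acked = 2 * length (filter (`elem` acked) config) > length config

-- Agreement under C<old, new> needs a majority of the old configuration and,
-- separately, a majority of the new one.
agreedUnderJointConsensus :: [String] -> [String] -> [String] -> Bool
agreedUnderJointConsensus oldConfig newConfig acked =
    isMajorityOf oldConfig acked && isMajorityOf newConfig acked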

When a leader receives a configuration change message, it stores and replicates the entry for joint consensus, C<old, new>. A server always uses the latest configuration in its log to make decisions, even if it isn't committed yet. Once joint consensus is committed, only servers with C<old, new> in their logs can become leaders.

It is now safe for the leader to create a log entry describing C<new> and replicate it to the cluster. Again, this configuration will take effect on each server as soon as it is seen. When the new configuration has been committed under the rules of C<new>, the old configuration is irrelevant and servers not in the new configuration can be shut down.

A fantastic visualization of how Raft works can be found here.

More material such as talks, presentations, related papers and open-source implementations can be found here.

I have dug only into the details of the basic algorithm that makes up Raft and the safety guarantees it provides. The paper contains a lot more detail, and it is super approachable, as the primary goal of the authors was understandability. I definitely recommend reading it, even if you've never read any other paper before.




Understanding the Raft consensus algorithm: an academic article summary was originally published in freeCodeCamp on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shubheksha Jalan | Stories by Shubheksha on Medium | 2017-12-04 17:12:44

That moment when you try to google haskell-yesod for more tutorials, stumble upon something called Kabbalah, think "wow, that's such a good name, the words fit really well together, with their Hebrew origin" and then realize it's not some piece of software, it's actual Kabbalah.


Asal Mirzaieva | code. sleep. eat. repeat | 2017-12-04 02:32:36

Well hello

So, I started to learn web programming with haskell and yesod. The Yesod book was too hard for me to grasp, and I couldn't find a plausible entry-level tutorial that wasn't written 5 years ago and would still compile. So I took an article by yannesposito and fixed it.

I saw some effort to fix the tutorial on the School of Haskell here, but its formatting gave me the impression that it is not maintained anymore.

Prerequisites: a basic understanding of haskell. If you lack it, I recommend you read this.

1. Work environment setup


I use stack instead of cabal. I, together with the yesod manual, really recommend you use it. Not to mention that with yesod you don't really have a choice: the only way to create a new project is to use stack  ¯\_(ツ)_/¯.

1.1 So, let's get us some stack!


It's as easy as running either one of the two.
curl -sSL https://get.haskellstack.org/ | sh
or
wget -qO- https://get.haskellstack.org/ | sh

I strongly recommend using the latest version of stack instead of just apt-getting it. Ubuntu repos often contain older and buggier versions of our favorite software.

[optional] Check what templates are available
$ stack templates

1.2 Generate a template project.


You can generate a yesod project only using stack; the init command has been removed from yesod. Use the yesod-sqlite template to store your blog entries (see the "Blog" chapter). Of course, if you don't intend to go that far with this tutorial, you can use yesod-simple. So, let's create a new project called "yolo" of type yesod-sqlite.
stack new yolo yesod-sqlite

1.3 Install yesod

To be able to run your project, you have to install yesod. This takes about 20 min.
stack install yesod-bin --install-ghc

1.4 Build and launch

Warning, first build will take a looong time
stack build && stack exec -- yesod devel

And check your new website at http://localhost:3000/
(3000 is the default port for yesod).
For more detailed reference about setting up yesod look here.

My versions of stuff:
stack: Version 1.5.1, Git revision 600c1f01435a10d127938709556c1682ecfd694e
yesod-bin version: 1.5.2.6
The Glorious Glasgow Haskell Compilation System, version 8.0.2


2. Git setup

You know it's easier to live with a version control system.
git init .
git add .
git commit -m 'Initial commit'

3. Echo

Goal: going to localhost:3000/echo/word should generate a page with the same word.

Don't add the handler with `yesod add-handler`, instead, do it manually.

Add this to config/routes, thus adding a new page to the website.
/echo/#String EchoR GET
#String is the type of the path piece after the slash; haskell's strong types prevent things like SQL injection, for example.
EchoR is the name of the handler for this route, and GET is the supported request method.

And this is the handler, add it to src/Handler/Home.hs.
getEchoR :: String -> Handler Html
getEchoR theText = do
    defaultLayout $ do
        setTitle "My brilliant echo page!"
        $(widgetFile "echo")

This tiny piece of code accomplishes a very simple task:
  • theText is the argument that we passed through /echo/<theText is here>
  • for it we return a defaultLayout (that is specified in templates/defaultLayout.hamlet and is just a standard blank html page)
  • set page's title "My brilliant echo page!"
  • set main widget according to templates/echo.hamlet
 Also, remember that RepHtml is deprecated; use plain Html instead.

So, let's add this echo.hamlet to <projectroot>/templates! As you can see, it's just a header with the text that we passed after the slash in /echo/<word here>.
<h1> #{theText}

Now run and check localhost:3000/ :)

If you're getting an error like this
 Illegal view pattern:  fromPathPiece -> Just dyn_apb9
Use ViewPatterns to enable view patterns
or
 Illegal view pattern:  fromPathPiece -> Just dyn_aFon 

then just open the package.yaml file that stack automatically created for you, and add the following line right after the `dependencies:` section:
default-extensions: ViewPatterns

Else if you're getting something like this
yesod: devel port unavailable
CallStack (from HasCallStack):
  error, called at ./Devel.hs:270:44 in main:Devel
then most probably you have another instance of the site running, and thus port 3000 is unavailable.

If you see this warning
Warning: Instead of 'ghc-options: -XViewPatterns -XViewPatterns' use
'extensions: ViewPatterns ViewPatterns'

It's okay; so far stack does not support the 'extensions' section in the .cabal file. You can catch up with this topic in this thread.

If you see this warning
Foundation.hs:150:5: warning: [-Wincomplete-patterns]
    Pattern match(es) are non-exhaustive
    In an equation for ‘isAuthorized’:
        Patterns not matched: (EchoR _) _

That means that you need to add this line to Foundation.hs:
isAuthorized (EchoR _) _ = return Authorized
All it does is grant everybody permission to access localhost:3000/echo.

 4. Mirror

Goal: create a page /mirror with an input field, which will post the word and its reverse glued together (a palindrome), as in book -> bookkoob or bo -> boob.

Add the following to config/routes to create a new route (i. e. a page in our case).
/mirror MirrorR GET POST


Now we just need to add a handler to src/Handler/Mirror.hs
-- the module header is needed so that Application.hs can import Handler.Mirror
module Handler.Mirror where

import Import
import qualified Data.Text as T

getMirrorR :: Handler Html
getMirrorR = do
    defaultLayout $ do
        setTitle "You kek"
        $(widgetFile "mirror")

postMirrorR :: Handler Html
postMirrorR = do
    postedText <- runInputPost $ ireq textField "content"
    defaultLayout $(widgetFile "posted")

 Don't be overwhelmed! It's quite easy to understand.

And add the handler import to src/Application.hs; you will see a section where all the other handlers are imported:

import Handler.Mirror

Mirror.hs mentions two widget files, 'mirror' and 'posted'; here are their contents.

templates/mirror.hamlet

<h1> Enter your text
<form method=post action=@{MirrorR}>
    <input type=text name=content>
    <input type=submit>

templates/posted.hamlet
<h1>You've just posted
<p>#{postedText}#{T.reverse postedText}
<hr>
<p><a href=@{MirrorR}>Get back

There is no need to add anything to the .cabal or .yaml files, because stack magically deduces everything on its own :)

Don't forget to add the new route to isAuthorized like in the previous example!

Now build, launch, and check out localhost:3000; you should see something similar to my pics.

stack build && stack exec -- yesod devel

And after you entered some text in the form, you should get something like this

5. Blog

Again, add Handler.Article and Handler.Blog to Application.hs imports.
These are the contents of Blog.hs:

{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies #-}

module Handler.Blog
    ( getBlogR
    , postBlogR
    , YesodNic
    , nicHtmlField
    ) where

import Import
import Data.Monoid

import Yesod.Form.Nic (YesodNic, nicHtmlField)
instance YesodNic App

-- the form used both to render and to parse a new article
entryForm :: Form Article
entryForm = renderDivs $ Article
    <$> areq textField "Title" Nothing
    <*> areq nicHtmlField "Content" Nothing

getBlogR :: Handler Html
getBlogR = do
    -- fetch all articles, sorted by title (descending), and show them with the form
    articles <- runDB $ selectList [] [Desc ArticleTitle]
    (articleWidget, enctype) <- generateFormPost entryForm
    defaultLayout $ do
        setTitle "kek"
        $(widgetFile "articles")

postBlogR :: Handler Html
postBlogR = do
    ((res, articleWidget), enctype) <- runFormPost entryForm
    case res of
        FormSuccess article -> do
            articleId <- runDB $ insert article
            setMessage $ toHtml $ (articleTitle article) Import.<> " created"
            redirect $ ArticleR articleId
        _ -> defaultLayout $ do
            setTitle "You loose, sucker!"
            $(widgetFile "articleAddError")


Article.hs contents

{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies #-}

module Handler.Article
    ( getArticleR
    ) where

import Import

getArticleR :: ArticleId -> Handler Html
getArticleR articleId = do
    article <- runDB $ get404 articleId
    defaultLayout $ do
        setTitle $ toHtml $ articleTitle article
        $(widgetFile "article")

Add this to config/models:

Article
    title Text
    content Html
    deriving

And this to config/routes:

/blog               BlogR       GET POST
/blog/#ArticleId ArticleR GET


And add these lines to src/Foundation.hs. This is a hack, but without it you cannot view the pages unauthorized, right? :) Drawback: all users on the internet will be able to see your posts.

    isAuthorized BlogR _ = return Authorized
    isAuthorized (ArticleR _) _ = return Authorized

All done! Here's what you will see:

I’ve been trying to wrap my mind around the fonts section of the “Thinking With Type” book.

I started by hunting for family trees for common font families. Failing to find those — likely because there’s an astonishing number of fonts out there — I started doodling around trying to get something on paper for myself.

Without further ado, here's my best approximation of the information in the section I've read, with some space available for further exploration. Mostly, I think I'm baffled by how one selects a font or font family, in part due to the sheer number of fonts out there, and in part because some require money. I'll start out playing with google fonts, because those seem to be specific to the web, and free. Open Sans seems to be a decent default, and Patternfly uses it.

Font Categories

“Thinking With Type” starts out by explaining the history behind fonts, and structures things by that history.

Humanist (or Roman) fonts include what were originally the gothic and italic typefaces — these came from hand-written, script and body-based styles. These relied upon calligraphy and the movements of the hand.

Enlightenment fonts were based on engraving techniques and lead type, and allowed for more flexibility in what was possible. This included both Transitional and Modern typefaces, which began the process of separating and modifying pieces of a letterform. Transitional started with Baskerville’s sharper serifs and more vertical axes. Modern went to an extreme with this, with Bodoni and Didot’s thin, straight serifs, vertical axes, and sharp contrast between thick and thin lines.

Abstract fonts went even further in the direction of exaggerating the pieces of a letterform, in part because of the additional options available with industrialization and wood-cut type.

Reform and Revolution were a reaction to the abstract period, in which font makers returned to their more humanist roots.

Computer-optimized fonts were created to handle the low resolution available with CRT screens and low resolution printers.

With the advent of purely digital fonts, creators of fonts started playing with imperfect type. Others created font workhorses using flexible palettes.

This is probably better named Font History!

Humanist Fonts

Humanist fonts were based on handwriting samples.

Gothic fonts were based on German writing, such as that of Gutenberg:

https://www.myfonts.com/fonts/alterlittera/gutenberg-a/

Whereas the Italic fonts were based on Italian cursive writing:

https://en.wikipedia.org/wiki/Italic_type

These were combined by Nicolas Jenson in 1465 into the first Roman typeface, from which many typefaces sprung:

https://en.wikipedia.org/wiki/Nicolas_Jenson
I don’t have much about the ones after Jenson.

Enlightenment Fonts

With the Enlightenment period came experimentation.

From the committee-designed romain du roi typeface, which was entirely created on a grid:

http://ilovetypography.com/2008/01/17/type-terms-transitional-type/

To the high contrast between the thick and thin elements from Baskerville, no longer strongly attached to calligraphy (the point at which you enter the Transitional period for fonts):

http://ilovetypography.com/2008/01/17/type-terms-transitional-type/
https://en.wikipedia.org/wiki/Baskerville

The Modern fonts from Bodoni and Didot further increased the contrast between thick and thin elements beyond Baskerville’s font.

https://en.wikipedia.org/wiki/Bodoni and https://en.wikipedia.org/wiki/Didot_(typeface)

Abstraction Fonts

In the abstraction period, the so-called Egyptian or Fat Face fonts (now known as slab serifs) came about. These were the first attempts at making type serve a function other than long lines of book text, namely advertising — otherwise known as display typefaces.

These took the extremes of the Enlightenment period and went to extremes with them, making fonts whose thin lines were barely there, and whose thick lines were enormous.

Egyptian, or Slab Serif, from http://ilovetypography.com/2008/06/20/a-brief-history-of-type-part-5/
Fat Face, from http://ilovetypography.com/2008/06/20/a-brief-history-of-type-part-5/

Reform and Revolution Fonts

Font makers in the reform period reacted to the excesses of the abstraction period by returning to their historic roots.

Johnston (1906) used more traditional letterform styles of the Humanist period, although without serifs:

https://en.wikipedia.org/wiki/Johnston_(typeface)

The Revolution period, on the other hand, continued experimenting with what type could do.

The De Stijl movement in particular explored the idea of the alphabet (and other forms or art) as entirely comprised of perpendicular elements:

Doesburg (1919), https://zaidadi.wordpress.com/2011/03/09/de-stijl-in-general/
Forgive the bright pink aspect of this. It’s my lighting!

Computer-Optimized Fonts

The low resolution of early monitors and printers meant that fonts needed to be composed entirely of straight lines to display well.

Wim Crouwel created the New Alphabet (1967) font type for CRT monitors:

http://luc.devroye.org/fonts-24196.html

Zuzana Licko and Rudy VanderLans created the type foundry Emigre, which includes Licko’s Lo-Res (1985) font:

https://www.myfonts.com/person/Zuzana_Licko/

Matthew Carter created the first web fonts in 1996 for Microsoft, Verdana (sans serif) and Georgia (serif):

From Wikipedia, https://en.wikipedia.org/wiki/Verdana and https://en.wikipedia.org/wiki/Georgia_(typeface)

Imperfect Type

With the freedom from the physicality of the medium (such as lead type or wood type) that came with computers, some font designers began experimenting with imperfect types.

Deck made Template Gothic (1990), which looks like it had been stencilled:

https://en.wikipedia.org/wiki/Template_Gothic

Makela made the Dead History (1990) font using vector manipulation of the existing fonts Centennial and VAG Rounded:

https://www.emigre.com/Fonts/Dead-History

And Rossum and Blokland made Beowulf (1990) by changing the programming of PostScript fonts to randomize the locations of points in letters:

https://www.fontfont.com/fonts/beowolf

Workhorse Fonts

Also during the 1990s, some folks were working on fonts that were uncomplicated and functional. Licko’s Eaves pair, with their small X-heights, are good for use in larger sizes:

https://www.emigre.com/Fonts/Mrs-Eaves (1990) and https://www.emigre.com/Fonts/Mr-Eaves-Sans-and-Modern (2009)

Smeijer’s Quadraat (1992) started as a small serif font, with various weights and alternatives (sans and sans condensed) added to the family over time:

https://www.fontfont.com/fonts/quadraat

Majoor’s Scala (1990) is another simple, yet complete, typeface family:

https://en.wikipedia.org/wiki/FF_Scala

Finally, at the turn of the century, Frere-Jones created the Gotham (2000) typeface. Among other places, it featured prominently in Barack Obama’s 2008 presidential election campaign.

https://en.wikipedia.org/wiki/Gotham_(typeface)

Terminology

In an effort to better remember various suggestions and terms used throughout the Font portion of "Thinking With Type," I created a terminology sheet.

I'm most likely to forget that there are multiple different marks that can be understood as quotes, and how to use them. Additionally, that larger x-heights are easier to read at small sizes.

Common Fonts?

I started making a list of common fonts, but quickly realized that this was a complex and difficult task. I’m including what I made for completeness, but it seems like a superfamily (like Open Sans) will be fine for most of my work.

What’s next for me in Typography and Visual Design?

The book discusses Text next, after an exercise in creating modular letterforms on a grid. I’m looking forward to it, but I do need a break from it for now.

I’ve started trying to mimic existing visual designs (from the collectui.com website), as many folks have suggested it’d be the best way to get a feel for what works and how to do it. I’ll likely talk more about that here, once I’m further along in that process.


Thinking With Type: Fonts was originally published in Prototypr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-12-03 04:59:06

The Linux Foundation holds a lot of events where companies can reach the maintainers and developers across all the important open source software projects in enterprise, networking, embedded, IoT and cloud infrastructure.

Open Source Summit is the premier open source technical conference in North America, gathering 2,000+ developers, operators and community leadership professionals to collaborate, share information and learn about the latest in open technologies, including Linux, containers, cloud computing and more. This time they also organised a Diversity Summit.


I was really lucky and excited to be a part of the OSS NA 2017. I had sent a proposal earlier in May for delivering a talk based on my Outreachy internship with Linux Kernel from December 2016 - March 2017, which was accepted. The best part was connecting with my mentor Matthew Wilcox as well as a fellow intern who was also my co-speaker for the talk, Sandhya Bankar, though we all missed having Rik Van Riel, also my mentor for the Outreachy internship, and Julia Lawall, who manages and mentors the outreachy internships and has been really encouraging throughout.



With my mentor Matthew Wilcox

With Greg Kroah-Hartman
Coming to the conference, I had never experienced such a diverse collection of talks, people and organisations as in OSS NA 17. Even the backdrop of Los Angeles, its amazing food and places and wonderful networking opportunities truly made the experience enriching and magical.

With Sandhya Bankar at the partner reception at The Rooftop at The Standard.
At the evening event at Paramount Studios
At the Walt Disney Concert Hall with Jaminy Prabha
I couldn't understand the talks from CloudOpen and ContainerCon tracks much, but going around the sponsor showcase, I got a lot of background on what these very popular and upcoming technologies are about and got inspired to explore them when I go back. I had some very interesting discussions there. I attended many LinuxCon and Diversity track talks as well as keynotes which I found most interesting and took a lot home from them.

At the sponsor showcase with this amazing web representation of all the technologies there.
The opening keynote by Jim Zemlin really set up the excitement about open source and the next 4 days.



Next I really liked knowing about the CHAOSS Project and found it relevant to my research area at college. CHAOSS is a new Linux Foundation project aimed at producing integrated, open source software for analyzing software development, together with defining implementation-agnostic metrics for measuring community activity, contributions, and health.

Another highlight of the first day was the Women in Open Source Lunch sponsored by Intel. The kind of positivity, support, ideas and inspiration in that room was really one of its kind.

Being one of the few students at the summit, I found the talk "Increasing student participation in open source" very interesting. Similarly, the career fair was fun: I got to talk to engineers at popular companies and learn about the job profiles they are seeking.

All the keynotes over the rest of the conference were really interesting, but the afternoon talks sometimes required too much attention to really get them and I was jet-lagged.

I particularly enjoyed the keynote by Tanmay Bakshi as well as meeting him and his family later. He talked about how he’s using cognitive and cloud computing to change the world, through his open-source initiatives, for instance, “The Cognitive Story”, meant to augment and amplify human capabilities; and “AskTanmay”, the world’s first Web-Based NLQA System, built using IBM Watson’s Cognitive Capabilities. It was inspiring to learn how passionate he is about technology at such a young age.


A highlight from the third day was my mentor's talk on Replacing the Radix Tree which was a follow-up thread to my outreachy internship and inspired me to contribute to the new XArray API and test suite.

I am grateful to all from the Linux community and all those who support the outreachy internships, and Marina Zhurakhinskaya and Sarah Sharp. I'm also really grateful to The Linux foundation and Outreachy programme for sponsoring my trip.

Rehas Mehar Kaur Sachdeva | Let's Share! | 2017-12-03 01:38:30

UX folks may be in the best position to identify ethical issues in their companies. Should it be their responsibility?

In the previous section, I described the state of UX practice at technology companies, and the need for high-level buy-in for successful UX integration.

There is a concerning — and increasingly evident — lack of ethical consideration in the processes of most software companies. In this section, I will describe some of the ways in which this has recently become more apparent.

Digital Ethics

The software in our lives is not generally designed with our health and well-being in mind. This fact is becoming clear as Facebook, Google, and Twitter are in the spotlight over Russia's interference with our elections and increasing political divides. Twitter has also typically been unwilling to do much about threats or hate speech.

There is too much focus on engagement and creating addiction in users, and not enough on how things might go bad and appropriate ways to handle that.

Internet of Things (IoT)

There's a proliferation of products in the Internet of Things (IoT) space, many of which are completely insecure and thus easily turned into botnets, have the private information on them exploited, or get hacked and used as information-gathering devices.

Effects on Kids

Some IoT devices are specifically targeted at kids, but few or no companies have put any effort into identifying how they will affect the development of the children who use them. Concerned researchers at the MIT Media Lab have begun to study the effects of intelligent devices and toys on kids, but this won’t stop the continued development of these devices.

Similarly, it’s unclear how the use of devices that were originally aimed at adults — such as Alexa — will affect the kids in those houses. On one hand, it doesn’t involve screen time, which is no longer completely contraindicated for kids under two but is still wise to limit. On the other hand, we have no idea how those devices will answer questions they were not programmed to handle. Additionally, these devices do not encourage kids to use good manners — one of the important lubricants for the fabric of society. It’s hard enough to teach kids manners without having that teaching undermined by an intelligent device!

Finally, consider how machine learning can result in some truly horrific scenarios (content warning: the linked essay describes disturbing things and links to disturbing graphic and video content).

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatize, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level.
James Bridle · Writer and Artist

Willful ignorance: Twitter and Equifax

Similarly, we’ve seen the results of a focus on metrics and money over security and sanity. Twitter not only knew that there were spam and fake accounts from Russia and the Ukraine in 2015, but refused to remove them because

“They were more concerned with growth numbers than fake and compromised accounts,” Miley told Bloomberg.

Equifax stores highly sensitive information about people in the US, and left security vulnerabilities open for months after being told about them. As a result, they had multiple security breaches, basically screwing over anyone whose data was stolen.

Yeah, no. You knew you had vulnerabilities!

Thoughtlessness: Google, Facebook, and Big Data

Even without willful ignorance, thoughtlessness alone can easily be enough to put individuals, communities, and societies at risk.

Considering the breadth of data that many companies are collecting on those who use their products, there is a worrying lack of thought given to the invasiveness of this practice and to how to safeguard the data in question. These companies often make poor choices in what information to keep, how to secure and anonymize the information, and who has access to that information.

Some might say that having conversational devices like Alexa and Google Home is worth the privacy risks inherent in an always-on listening device. Others might suggest that it's already too late, given that Siri and Google Now have been listening to us and our friends through our phones for a long time now.

However, regardless of one’s thoughts on the timing of the concerns, the fact remains that tech giants have access to an amazing amount of information about us. This information is collected through our phones, through our searches and purchasing patterns, and sometimes through devices like the Amazon Echo and the Google Home Mini.

Some companies are better than others, such as Apple with its refusal to break its encryption for the FBI, but it can be quite difficult to identify which companies are making the best choices for their customers' privacy, safety, and sanity, and where they are making them.

Machine Learning

Take machine learning (also known as AI), and the fact that companies are more interested in selling ads than considering the effects their software has on their customers:

It’s not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. […] But it’s not the intent or the statements people in technology make that matter, it’s the structures and business models they’re building. […] Either Facebook is a giant con of half a trillion dollars and ads don’t work on the site, it doesn’t work as a persuasion architecture, or its power of influence is of great concern. It’s either one or the other. It’s similar for Google, too.
Zeynep Tufekci · Techno-sociologist

One of the major problems with machine learning is that we have _no idea_ precisely what associations any particular algorithm has learned. The programmers of those algorithms just say whether the output those algorithms provide is good enough, and often ‘good enough’ doesn’t take into account the effects on individuals, communities, and society.

I hope you begin to understand why ethics is a big concern among the UX folks I follow and converse with. At the moment, the ethics of digital products is a big free-for-all. Maybe there was a time when ethics wasn’t as relevant, and code really was just code. Now is not that time.

In part 3, I’ll discuss the positioning of UX people to more easily notice these issues, and the challenges involved in raising concerns about ethics and ethical responsibility.

Thanks to Alex Feinman, Máirín Duffy, and Emily Lawrence for their feedback on this thread!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-12-01 14:34:13

User Experience (UX) folks may be in the best position to identify ethical issues in their companies. Should it be their responsibility?

This will be a multi-part story.

In this first part, I’m going to explain some of the problems inherent in the implementation of UX practices at technology companies today, to provide the background necessary to make my point.

You can also skip ahead to part two, in which I talk about ethics in the tech industry today.

First: Why do Businesses want UX?

Poor user experience = burning your money

Businesses are starting to realize that they need to incorporate UX to retain and increase their customer base. Discussions with Boston-area user experience folks suggest that companies have figured out they should have incorporated UX years ago, and that they’re behind.

Many of those businesses are so new to UX that they don’t understand what it means. Part of the reason for this is that ‘UX’ is an umbrella term, typically including:

  • user research
  • information architecture (or IA)
  • interaction design (or IxD)
  • content specialists
  • visual design

In addition, some UX teams include front-end developers, as it can otherwise be difficult to be certain that the developers implementing the interface have a basic understanding of user experience.

User Experience is complicated!

When looking for UX employees, some businesses end up throwing the kitchen sink into their job descriptions, or look for the extremely rare UX unicorn — someone skilled at all parts of UX as well as development. This unfortunately makes it nearly impossible for them to get what they need, or perhaps to get any decent candidates at all.

Often, people expect the UX unicorn to do all aspects of UX and write code. A more reasonable expectation is to understand how coding works, even if you don’t do it yourself.

Other employers prioritize visual or graphic design skills over the skills necessary to understand users, because they have gotten the impression that ‘making it pretty’ will keep their customers from leaving. Often the problem is at a much deeper level: the product in question was never designed with the user’s needs in mind.

Successful UX needs high-level buy-in

Unfortunately, bringing UX professionals into a company without buy-in at the top level nearly guarantees that they will fail. In addition to their regular UX work, they will also be stuck with the job of trying to sell UX to the rest of the company. Without support from higher-ups in the company, it is nearly impossible for a single person to make the amount of change necessary.

Surveying local people, I learned that being the only UX person in a small company or startup is probably doable, if the company understands the value you bring. There are fewer people to convince, and usually fewer products to deal with.

However, being the only UX person in a big company will likely be an exercise in frustration and burnout. On top of the fact that you’re trying to do too many different things on your own, you’ve also got to try to keep the bigger picture in mind.

Some important long-term questions include:

  • “What are the right strategic directions to go in?”
  • “Are the things that you are creating potentially going to cause or enable harm?”

The second question brings us to “who in high tech is thinking about the ethics of their creations?” Unfortunately, too often, the answer is ‘no one’, which I will discuss in Part 2.

Thank you to Alex Feinman and Máirín Duffy for their feedback on this article!

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-12-01 14:32:33

To conclude my internship with Mozilla, I did a video presentation on Air Mozilla. This is a high-level overview of the solution I designed to automate accessibility testing.

Below are the slides, followed by the transcript and video.

Automating Web Accessibility Testing

Hi, my name is Kimberly Sereduck. I recently completed an internship with Mozilla through the GNOME Outreachy program. As part of the test engineering team, the goal of my internship was to automate web accessibility testing.

Which, for most people, probably brings up two new questions in itself:
What is Automated Testing?
What is Web Accessibility?

What is automated testing?

In order to ensure the quality of software and websites, software engineers and test engineers write tests to check that the product works as it should. Automated tests often run on a set schedule, or when an update is released.

What is Web Accessibility?

Accessibility is a word we have probably all heard before, but you may or may not understand what it means. Even while working as a web developer, I wasn’t sure exactly what it was, or why it was important.

Imagine for a moment that, when browsing the web, your knowledge of what a page contains was based exclusively on dialogue from a Screen Reader.

A screen reader is a type of software that will read aloud the contents of a page, including headers, links, alternative text for images, input elements, and more.

This technology allows users with limited vision to navigate the resources of the internet.

However, if a website is not designed with accessibility in mind, it can still be very difficult or even impossible for these types of users to understand what a page contains, or to interact with it.

This is just one example of a user who requires an accessible web. There are many different types of users, who utilize different types of assistive technologies.

This brings me to the topic of web accessibility testing.

Web accessibility testing is simply an analysis of how accessible a website is.

There are detailed standards of web accessibility called the Web Content Accessibility Guidelines, or WCAG for short.

An example of one of these rules is that all images must contain alternative text. This text would be read aloud by a screen reader.

If an image is used to help convey a point, providing an alt attribute allows non-sighted users to understand the context more fully.

In the example here, a graph is an image used to convey information, and not used just for display purposes.

A good alternative text would be to concisely describe what information the graph contains.

Problem Domain

Accessibility testing for websites is, at this point, largely manual work. There are many tools that exist to generate accessibility reports for websites. These are mostly limited to browser-based tools.

Here is an example of one of the browser-based tools.

Most of these tools do not offer much in the way of customization or flexibility.

Most of them return a list or report of all accessibility violations found on a single page.

However, almost all websites are composed of multiple pages, and some may even have dozens of different types of pages.

Using one of these browser-based accessibility tests would require someone manually running the tests on each different type of page, reviewing the results, and creating an accessibility report for the site.

For most companies that address the accessibility of their websites, this is done on an irregular basis, and is not integrated into their automated testing workflow.

An example of what I mean by automated testing workflow:

If an update is made to the website download.mozilla.org, which results in users not being able to download Firefox, there are automated tests in place to catch these errors.

This enables test engineers to be notified right away, rather than the problem going unnoticed until someone happens to catch it.

This type of testing is called regression testing. Regression tests make sure that features that worked before an update still work after the update.

What Problem Does This Solve?

The problem that this project solves is to integrate regression testing for web accessibility into this automated workflow.

So, if a site is accessible, and an update is released that makes it less accessible, these tests would notify test engineers of the new accessibility violations.

To make this possible, I have written software that allows python programmers to make use of a tool called aXe in their python tests.

aXe is an API created by Deque Systems, a company that specializes in web accessibility.

aXe can be run against a web page and return a list of all violations of accessibility rules.

aXe also allows customization, such as including or excluding certain accessibility rules, or testing specific parts of a page.
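
In the browser, a customized run looks roughly like the sketch below; the selectors and rule IDs are just made-up examples of the context and options aXe accepts.

import axe from "axe-core";

// A rough sketch of a customized axe-core run; the selectors and rule IDs
// below are made-up examples, only the shape of axe.run() comes from the docs.
async function checkMainContent() {
  const results = await axe.run(
    // Analyse only part of the page, and skip a third-party widget.
    { include: [["#main"]], exclude: [["#third-party-widget"]] },
    // Restrict the run to a couple of specific rules.
    { runOnly: { type: "rule", values: ["image-alt", "label"] } }
  );
  // Each violation lists the rule that was broken and the offending nodes.
  for (const violation of results.violations) {
    console.log(violation.id, violation.help, violation.nodes.length);
  }
}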

The software I have written, called axe-selenium-python, maintains this ability to customize your accessibility tests.

The way this software works is that it is included in existing automated tests, adding accessibility checks for each page that the tests visit.

This puts accessibility on the same level as functionality, and accessibility reports will be generated every time a test suite is run, rather than manually and infrequently.

Creating a more inclusive web is a top priority to the people at Mozilla.

Mozilla has a dedicated accessibility team that does check their websites for accessibility.

However, we would like to give the test engineering team the ability to include accessibility checks in their automated testing workflow.

My work will enable Mozilla to test the accessibility of its websites on a more regular basis, and do it using much less time and resources.

Goals for the Future

Although the software I have written is functional and currently available for public use, there are many goals to improve it, making it simpler to use and more beneficial.

Package

There are three main goals for improving the package I have written. The first is to devise a production-ready solution for testing individual accessibility rules, rather than running a single accessibility check for each page.

The reason for this is two-fold.

Without individual tests, there is a single PASS or FAIL for accessibility. If ANY rule is violated, this test fails. This also means that the results of the failing rules are shown in a single report.

However, if individual tests exist, there will be a pass or fail for each accessibility rule that is tested for, making the results more readable.

Test engineers would be able to quickly see the number of violations, and which rules were violated.

Having individual tests also allows the ability to mark tests as expected failures.

In the world of test engineering, when a failure is spotted, typically a bug or issue is filed, and the test will continue to fail until the bug is fixed. It is not uncommon for some issues to go unaddressed for weeks.

In most cases, you don’t want to keep getting notifications of a failing test after a bug has been filed.

So the tests are marked as expected failures, and instead of being notified when they fail, the test engineers will be notified when the test begins passing again.

I want to enable this same functionality with my software.

While I have succeeded with a couple of different approaches, neither of them can be easily integrated into existing test suites.

I want my solution to be as hassle-free as possible, allowing users to easily add and customize these individual tests.

The third goal for improving this package is to generate more sophisticated accessibility reports. I want to allow users to view a single report for accessibility, one that is well designed and presents the information in a way that is easier to understand.

Company

A long-term goal that I have as far as the company is concerned is to develop an Accessibility dashboard for all of Mozilla’s web assets.

This dashboard would use the data from the most recent accessibility tests, and give each Mozilla website a grade or rating for accessibility.

This dashboard would be included with others that are currently displayed in several of the Mozilla offices around the world.

Producing this dashboard would increase the visibility and awareness of Mozilla’s accessibility status, which, in turn, would have a significant impact on Mozilla’s goal of creating a more inclusive web.

Community

Another goal I have for this project is to involve community contributors in Mozilla’s accessibility goals.

One idea for increasing community involvement is by adding a new feature to my axe-selenium-python package, or to write a new package altogether.

This feature or package would automatically file a bug when a new accessibility violation is found.

Most accessibility violations are fairly easy to fix, and require only a basic knowledge of HTML.

As such, they would make great First Bugs for people who have never contributed to Mozilla or an open-source project before.

This would help increase community contributions overall, and also to expedite the process of fixing these accessibility violations.

This feature could tremendously affect Mozilla’s inclusivity goals, and do so without adding a large workload to full-time employees.

Conclusion

More and more people every day have access to the internet.

Millions of these users have limited vision or limited physical ability, and rely on technologies other than a mouse and a screen to navigate the web.

The internet is a vast resource that opens up a world of knowledge that humans have never had access to before. It is becoming increasingly important in all aspects of life.

Knowledge, to me, is not a privilege, but a basic human right. As such, I believe that the resources of the internet should be made accessible to all types of users.

An accessible web also allows all people to participate more actively in society.

I am very fortunate to have had an opportunity to help create a more inclusive web. I hope to continue to do so, and make an even more significant impact in the world of accessibility and inclusivity.

If I seem terribly uncomfortable and out of breath, I was! I was almost 9 months pregnant when I recorded this presentation, so please, pity me.

The post Air Mozilla Presentation – Automating Web Accessibility Testing appeared first on Kimberly the Geek.

Kimberly Pennington | Kimberly the Geek | 2017-11-30 15:57:55

Ferris the crab, unofficial mascot for Rust [http://www.rustacean.net/]

Welcome, new Rustacean! So you have decided that you want to start learning Rust. You are where I was in summer 2017. I have read a lot of stuff online, tinkered with some byte-sized newbie contributions to the Rust project, and generally just hung out with the Rust community both virtually and in person (at Rust Belt Rust in Columbus, OH). In case you start feeling a little lost and overwhelmed on where to look, let me give you the Newbie’s Guide of what I found to be super useful. There is still a steep learning curve for Rust and you will need to put in the time and practice. However, certain channels will superboost your initiation into the friendly Rustacean community before you become lost and frustrated.

One thing to note is that the Rust language has rapidly evolved in the past couple of years, so some of the posts and examples online from 2015 and earlier may not be considered idiomatic anymore. This post will point you towards resources that are regularly updated.

Learning Rust

Most of the Rust docs are quite intimidating. I would start with the Rust book while completing Rust exercism exercises and tinkering with Rust code in play.rust-lang.org. You can import some external crates in the Rust Playground. (Examples and list of supported crates)

It’s important to actually be writing Rust code and running it through the compiler while reading the book. I had read a few chapters of the Rust book and thought I knew Rust, but then realized I didn’t really understand the concepts well when I tried to write code that compiles.

Checking out the Rust community

Cool! So now you know some Rust and are getting an idea of the syntax, semantics and concepts. What next?

Check out https://users.rust-lang.org/. One thing that is so cool about the Rust Language project is that it is transparent in how the language evolves. It is fascinating to read the threads and learn about how decisions are made for language development as well as increasing the usage and ergonomics of Rust for everyone. This includes Rust for developers that write production code, hobbyists, embedded systems, IDE support, Rubyists, Pythonistas, “frustrated C++ programmers”, and everyone and anyone that wants to know and learn more about Rust! :D

Look for a local Rust meetup to meet other Rustaceans in your area.

Community Updates

  • Subscribe to This Week in Rust.
  • Look for easy and help wanted issues to contribute to the Rust projects. You can “watch” this thread and get notified about new requests for contributions.
  • There is mentoring for all experience levels during the Impl period.

READ

LISTEN

There are two podcasts I follow right now:

  • New Rustacean (host: Chris Krycho): a broad range of topics on concepts, new crates, and interviews with key Rust team members
  • Rusty Spike (host: Jonathan Turner — Rust community team): the latest updates on happenings in the Rust community and the Servo project

WATCH

  • Rust talks online (Rust YouTube channel). There are 3 conference series: RustConf (West USA), Rust Belt Rust (Rust Belt, Midwest USA), and RustFest (Europe).
  • Follow @RustVideos on Twitter

Highlights:

If you are still looking for more links to add to your heap (heaps can grow, stacks not B-)): https://github.com/rust-unofficial/awesome-rust

Best of luck in your journey and hope to see you around in the Rust community!

Anna Liao | Stories by Anna Liao on Medium | 2017-11-29 17:29:57

Well hello

So I started to learn Haskell.
And I know it's old news, but still, look how short Haskell's quicksort implementation is!

qsort :: (Ord a) => [a] -> [a]
qsort [] = []
qsort (x:xs) =
    let smaller = qsort (filter (<= x) xs)
        bigger  = qsort (filter (> x) xs)
    in  smaller ++ [x] ++ bigger

And my first steps with Haskell are:
  1. learnyouahaskell
  2. exercism/haskell
  3. https://www.haskell.org/documentation
  4. Real World Haskell, but I must say that this book is incredibly vague, obfuscated and the opposite of clear. It is the hardest-to-follow book that I've read so far, and it's not that Haskell is complicated, no. It's the book.
  5. Web services for Haskell: yesod (how-to).

Asal Mirzaieva | code. sleep. eat. repeat | 2017-11-28 06:12:40

Last week in Germany, a few miles away from COP23, the conference where political leaders and activists meet to discuss climate, a bunch (100 to be exact) of developers and environmentalists participated in Hack4Climate to work on the same global problem – climate change.

COP23, the Conference of the Parties, happens yearly to discuss and plan action on combating climate change, especially the Paris Agreement. This year, it took place in Bonn, Germany, which is home to a United Nations Campus. Despite the ongoing efforts by governments, it’s the need of the hour that every single person living on Earth contributes at a personal level to fighting this problem. After all, we all, including myself, have somehow contributed to climate change, knowingly or unknowingly. That’s where the role of technology comes in: to create solutions that provide a pool of resources and correct facts so that everyone can start taking healthy steps.

I will try to put into words the thrilling experience Pranav Jain and I had participating as 2 of the 100 participants selected from all over the world for Hack4Climate. Pranav was also working closely with Rockstar Recruiting and the Hack4Climate team to spread awareness and bring in more participants before the actual event. It was a 4-day hackathon which took place on a *cruise* in front of the United Nations Campus. Before the hackathon began we had informative sessions from delegates of various institutions and organisations like UNFCCC – the United Nations Framework Convention on Climate Change – MIT Media Lab, IOTA and Ethereum. These sessions helped us all to get more insight into the climate problem from a technical and environmental angle. We focussed on using Distributed Ledger Technology – blockchain – and open source, which can potentially help to combat climate change.

Venue of Hack4Climate – The Scenic Crystal cruise stopping by the UN Campus in Bonn, Germany (Source)

 

The 20 teams worked on creating solutions which could fit into areas like identifying and tracking emissions, carbon pricing, distributed energy, sustainable land use, and sustainable transport.

Pranav Jain and I worked on Green – Low Carbon Diamonds through our solution, Chain4Change. We used blockchain to track the carbon emissions in the mining of minerals, particularly diamonds. Our project helps in tracking the mining, cutting and polishing process for every unique diamond which is available for purchase. It could also certify a carbon offset for each process and help the diamond company improve efficiency and save money. Our objective was to track carbon emissions throughout the supply chain, where we considered the kind of machinery, transport and power being used. The technologies used in our solution are Solidity, Android, Python and Web3JS. We integrated all of them on a single platform.

We wanted to raise awareness among customers by putting the numbers (carbon footprint) before them so that they know how much energy and fossil fuel was consumed for a particular mineral. This would help them make a smart, climate-friendly and greener decision during their purchase. After all, our climate is more precious than diamonds.

All project tracks had support from a particular company, which gave more insights and support on data and the business model. Our project track was sponsored by EverLedger, a company which believes that transparency is the key to ensuring ethical trade.

Project flow, Source – EverLedger

Everledger’s CEO, Leanne, talked about women in technology and swiftly made us realize how we need equal representation of all genders to tackle this global problem. I talked about Outreachy with other female participants, and amidst such a diverse set of participants, I felt really connected with a few people I met who were open source contributors. The open source community has always been very warm and fun to interact with. We exchanged which conferences we had attended, like FOSDEM and DebConf, and what projects we worked on. The current Outreachy round 15 is ongoing; however, applications for round 16 of the Outreachy internships will open in February 2018 for the May to August 2018 internship round. You can check this link here for more information on projects under Debian and Outreachy. Good luck!

Lastly and most importantly, thank you Nick Beglinger (CleanTech21 CEO) and his team, who put up this extraordinary event despite the initial challenges and made us all believe that yes, we can combat climate change by moving further, faster and together.

Thank you Debian, for always supporting us:)

A few pictures…

Pranav Jain pitching the final product
Scenic Crystal, Rhine River and Hack4Climate Tee

Chain4Change Team Members – Pranav Jain, Toshant Sharma, Urvika Gola

Thanks for reading!


Urvika Gola | Urvika Gola | 2017-11-26 12:39:28

This blog was created because I am supposed to report my journey through the Outreachy internship.

Let me start by saying that I'm biased towards systems that use flat files for blogs instead of ones that require a database. It is so much easier to make the posts available through other means (such as having them backed up in a Git repository) that assure their content will live on even if the site is taken down or dies. It is also so much better to download the content this way, instead of pulling down a huge database file, which may cost a significant amount of money to transfer. Having flat files with your content in a format that is shared among many systems (such as Markdown) might also assure a smooth transition to a new system, should the change become a necessity at some point.

I have experimented with some options while working on projects. I played with Lektor while contributing to PyBeeWare. I liked Lektor, but I found its documentation severely lacking. I worked with Grav while we were working towards getting tem.blog.br back online. Grav is a good CMS and it is definitely an alternative to Wordpress, but, well, it needs a server to host it.

At first, I thought about using Jekyll. It is a good site generator and it even has a Code Academy course on how to create a website and deploy it to Github Pages that I took a while ago. I could have chosen it to develop this blog, but it is written in Ruby. Which is fine, of course. The first steps I took into learning how to program were in Ruby, using Chris Pine's Learn to Program | versão pt-br. So, what is my objection with Ruby? It so happens that I expect most of the development for the Outreachy project will be done using Python (and maybe some Javascript) and I thought that adding a third language might make my life a bit harder.

That is how I ended up with Pelican. I had played a bit with it while contributing to the PyLadies Brazil website. During Python Sul, the regional Python conference we had last September, we also had a sprint to make the PyLadies Caxias do Sul website using Pelican and hosting it with Github Pages. It went smoothly. Look how awesome it turned out:

Image of the PyLadies Caxias do Sul website, white background and purple text

So, how to do one of those? Hang on tight, that I will explain it in detail on my next post! ;)

Renata D'Avila | Renata's blog | 2017-11-26 12:00:00

A lot of companies out there seem to want UX visual design skills more than they want UX research skills. I’ve often felt like I’m missing something important and useful by not having a strong grounding in visual design, and have been searching far and wide for some ideas of how to learn it.

One of the more interesting suggestions I have had relates to typography: many websites have typography and grid principles incorporated into them, so that is a good place to start. I’ve also had a number of suggestions to just make things, with pointers to where to get ideas of what to make. Below are the suggestions that make the most sense to me.

Typography to start?

A helpful fellow volunteer (Tezzica at Behance and other places — trained in graphic design with a UX aspect at MassArt) at the UX Fair offered me a number of useful ideas, including the strong recommendation that I read the book called “Thinking With Type”, by Ellen Lupton. This book is, if nothing else, a very entertaining introduction to the various types and type families. There is the history of various fonts and types, descriptions of the pieces of a piece of type, and examples both good and bad (she calls the latter “type crimes” and explains why they are type crimes). I’m only 1/3 of the way through it, so I’m sure there’s a lot more to it.

Tezzica also suggested that I take the SkillShare course by the same author, Typography that Works. Given that I currently have free access, I am in fact doing that. Some of what we’ve covered, I knew from previous courses (grids, mostly), and some recapped a bit of what I’ve read in the book thus far. Reminders and different types of media are really useful.

I’m unexpectedly bemused by the current section, in which we are to start designing a business card. While I found the ‘business card’ size in Inkscape, I’m not completely sure that I’m managing to understand how to make the text do what I want it to do. I suspect that a lot of visual/graphic design is in figuring out how to make the tools do what you want, and then developing a better feel for ‘good’ vs ‘bad’ with practice! (I’m currently playing with Gravit Designer, which is a great deal easier to use while still being vector graphics.)

I’ve also had a chat with one of the folks I interviewed about getting a job in Boston, Sam, who had gotten a job between me talking to him and interviewing him. He also strongly suggested typography, and seems to have already worked through a lot of the problems I’m struggling with: not a lot of understanding of how visual design works, but a strong pull toward figuring it out.

Another thing that Tezzica mentioned was assignments she’d had in school where basically they had to play around with type. In one, the challenge was to make a bunch of graphics which were basically combining letters of two different typefaces into a single thing, or a ‘combined letterform’.

What do graphic design students do?

Tezzica suggested that it would be useful to peruse Behance for students of RISD and MassArt and see where the samples look similar, and potentially identify the assignments from classes at those schools. I have thus far not been successful in this particular endeavor.

Another possible way to find assignments is to peruse Tumblr or Pinterest and see if any old assignments or class schedules are still there. Also thus far unsuccessful!

Both Tezzica and Sam suggested doing daily challenges (on Behance, since the accounts there don’t require someone else to invite you) using ideas from dailyui.co or Dribbble. Tezzica also suggested taking a look at common challenge solutions and seeing if there’s an interesting and different way to do them. She also pointed out the sharpen.design website and its randomized design prompts.

Sam suggested taking a website that I like the look of, and trying to replicate it in my favorite graphic design tool (this will probably end up being Inkscape, even though it’s not as user-friendly as I’d like), and pointed out that it could go onto my portfolio with an explanation of what I was thinking while I did it.

Coursework

Tezzica suggested a Hand Lettering course by Timothy Goodman and a Just Make Stuff course by him and Jessica Walsh (this one being largely about ‘making something already’). She also suggested Nicholas Felton’s Data Visualization courses (introduction to data visualization, and designing with processing). Both are on Skillshare.

Sam suggested I watch everything I can from John McWade on Lynda.com, and a graphic design foundations: typography course also on lynda.com.

Other training methods

Finally, Sam recommended taking screenshots and making notes of what I notice about sites that are interesting or effective and why.

This reminds me a bit of my periodic intent to notice what design patterns and informational architecture categorization methods websites use.

Mostly, I need to train my eye and my hand, both of which require practice. Focused practice, and I think between Sam and Tezzica, I have a good sense of where to go with it. At the moment, I’m focusing on the Thinking With Type book and course, as otherwise I’ll overwhelm myself.

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-11-22 23:37:38

Hello Folks! I am writing this to inform you all that I am moving to medium platform.

Medium is amazing. I have been thinking of doing this for the last few months, as I spend a lot of my time reading articles on Medium. I hope to see you all around 🙂

Here is the link to my profile: https://medium.com/@atbrakhi

 


Rakhi Sharma | aka_atbrakhi | 2017-11-22 10:11:56

It is such a good feeling every time I am at a Mozilla event.
The 8th edition of MozFest was organized this year, and it was so awesome.
Before the official start I had the chance to socialize with people from different communities, and that is one of the best ways to learn about the most interesting projects going on.
It was interesting since not everyone was from Mozilla or a Mozilla contributor; there were also people who were engaged in totally different things but still had something unique to share with others.

On Saturday afternoon I gave a talk in the decentralization session about “Mozilla as a role model for online diplomacy.”

It was also a good opportunity to catch up with other people from different work groups. We had a Mozilla Reps photo, which was so nice, getting us all in the same place after a long time, and we definitely couldn’t miss a group photo with all the Tech Speakers <3

The event was structured in a very nice way: there were 5 sessions (Decentralization, Security and Privacy, Digital Inclusion, Open Innovation and Web Literacy), and each of them had different talks or workshops going on at the same time, so people had a LOT of choices and could go to whichever session they wanted.

MozFest was really a unique experience for me and would definitely love to get back there.

PS: It was also my first time in London, and it was exactly as I was told it would be, no surprises at all.

See you all hopefully at the next MozFest.

Till then enjoy Firefox Quantum 😀

Kristi Progri | Kristi Progri | 2017-11-21 14:55:06


I look better than this, at times, I think.

I started off working on Business Partner and later moved to the Product Master application area. Both belong to Master Data Management in SAP S/4HANA.

  • Master Data? This was first explained by my buddy in our first meeting. There is Transactional Data and Master Data. He shared that master data is the central source and core information which does not change frequently. It is used as the base for transactions like selling, purchasing, procuring, producing, stock transfers etc. Transactional data, on the other hand, is not core data and changes per transaction.
  • Business Partner? A Business Partner can be an office, a group of people, or an organization in which an SAP customer’s company has a business interest. Say, the companies or offices Nestle has an interest in.
  • Product Master? This contains information on all the materials that a company stores, sells, procures or produces. It can be about a car, a car part or even a consulting service.

During my time with SAP:

  1. In Manage Customer Master Data and Manage Supplier Master Data I enabled the extensibility feature. Extensibility is something which is specific to a customer. Using this feature, customers can add more fields to the tables, have them on the UI, and have their own checks on them, i.e. extend the SAP-delivered application in a way which suits their needs. These two applications were built from scratch using Smart Template, BOPF, CDS and OData.
  2. I then moved to supporting extensibility in Manage Product Master. I enabled extensibility for more parts of the application in further releases, and along with this I was also responsible for keeping everything up in not-so-stable development systems.
  3. I was responsible for enabling the Product Master Hierarchy Type using classification data. The path contains classes and characteristics. This was a highlight of the IS-Media integration with the Product Master application in 2017. To understand this in generic terms, a product can have associated categories based on classes and characteristics. A car can come in different colors or with different features. Cars can also be divided into classes. With this, the integration of three areas (classification, Media Hierarchy and Product Master) was implemented. I wrote a blog post which has more details.
  4. I was solely responsible for the back-end work for enabling Article Hierarchy in the Product Master application. This was challenging for me given the requirements, discussions and planning that happened. I also worked on enabling the display scenario on the front-end. Article Hierarchy was shipped as a separate OData service, UI5 component and BOPF component. It was enabled as a loosely coupled component of the Product Master application, without any associations to the main entity Product/Material. I wrote a blog post which has more details.
  5. With SAP moving to the cloud, security is a primary concern in the SAP digital core. I was responsible for enabling row-level authorization of CDS views using DCLs for Product Master. CDS views are a more powerful way to control every field exposed from an SAP table. Every CDS view has a SQL view attached to it. CDS views can be considered the endpoint through which the specified columns or parts of tables are accessed. SAP ships these CDS views with the application, which also means that customers, or second-level customers, can have access to data like addresses and account information if it is not protected. To protect access to this data, the Data Control Language is used. This blog post has more details.
  6. I was solely responsible for keeping 400+ CDS views covered by automated tests with 100% coverage using the ABAP CDS Test Double Framework, which was released after I joined the Product Master application area. I was also responsible for OData automation and for automating the authorization tests for the DCL-protected CDS views.
  7. While working on all of these I enhanced core methods and APIs, mainly in the back-end, and worked on high-priority customer incidents. I was responsible for changes to core and transactional CDS views for enhancements done in the Product Master Fiori application to handle Maintenance Status and dynamic field handling.
  8. SAP is going to file a patent on the invention submission that I ideated and gave a better shape to with my colleague Venkat Bhargav A.S., a super nice person who is different from what he seems, btw. I was surprised to discover, for the first time during a call, that this silent, always formal person has a sense of humour.
  9. I got to meet this person and heard him speaking about everything he does with his family and team. There are NGOs and organization who won’t make you feel slightly uncomfortable, but this one makes you touch so many different possible aspects of human life. I can’t imagine what goes in his mind when he starts his every new morning.
  10. Became friends with her ♥. She wrote/drew for me and also cooked on my birthday. Me, I haven't. I do not have permission to share any other photo of her.
  11. Observed a change of leadership and how things can change; worked with different teams and people. I plan to write and talk about this sometime in the future.

If you need any information about the technical details that I mentioned above, shoot me a mail or tweet to me; I would be happy to talk, starting from the most basic and minor details to the information which is available in the help guides and documentation.

Also..

  1. I sprinted on OWASP Hackademic project in their summer of code program.
  2. Became a maintainer for Systers Portal and Volunteer Management System. Worked as a Google Summer of Code, Google Code-In and Outreachy mentor for these two projects.
  3. Organized and mentored Learn IT girl!, second season.
  4. Received Anita Borg Systers Pass-It-On Award Fall 2015 to work on improving the number of women in open source.
  5. Invited to speak in Django Con Europe’15(Views in Django), Linux Con Europe’15(Faults in Linux 3.x), PyCon UK’15(Share Your Code @PyPI), Open Source and Feelings’15(Don’t get scared, get started!), Latinity Conference’15(Faults in Linux 3.x), FOSSCON’15(Faults in Linux 3.x), GitHub CodeConf’16(Contributing to Linux Kernel), PyCon UK’16, Django Under The Hood’15 and ‘16; all fully funded; at DTU, IEEE Delhi Section SAC on getting started with open sources and in FOSSMEET’16 at NIT Calicut on Contributing to Linux Kernel(Workshop) and Getting Started with Contributing to Open Sources(talk).
  6. Became a Quora Top Writer in 2016. Quora is a question-and-answer site with an Alexa rank of 159. I have mostly written about open source, programming and computer science.

SAP! ☞

Numbers

To relate this to the market and how it has grown, you can read the history here. If you read here, under global developments,

SAP Labs China marks the ninth opening of a development location outside of Walldorf. This and the other research centers in India, Japan, Israel, France, Bulgaria, Canada, and the United States help SAP convert IT expertise into business utility for its customers. The company now employs around 30,000 employees, approximately 17,000 of whom work outside of Germany.

The total headcount now is 87800+ in more than 130 countries. 365,000+ customers in more than 180 countries, with SAP S/4HANA celebrating 1,000 live customers. SAP is huge and growing!

SAP Labs India was founded in November 1998; current numbers here.

Technologies

Mostly SAP!

Big old royal chunks are written using SAP ABAP and Java, mostly the former. With SAP Fiori, the goal is to simplify the user experience in the Cloud and ERP.

The SAP Fiori apps library has all the apps which are Fiorified. Mostly they are built on a stack which includes:

  1. Smart template
  2. SAP UI5
  3. OData services
  4. Business Object Processing Framework (BOPF)
  5. CDS views
  6. ABAP Programming Language
  7. SAP’s internal testing tools, which all follow the setup and teardown methodology.

Different software component categories are (taken from wiki)

  • SAP_BASIS is the required technical base layer which is required in every ABAP system.
  • SAP_ABA contains functionality which is required for all kinds of business applications, like business partner and address management.
  • SAP_UI provides the functionality to create SAP UI5 applications.
  • BBPCRM is an example of a business application, in this case the CRM application.
  • SAP ABAP is an ERP programming language.

Show and Tells With Other Teams

SAP is huge, hence they have many different developments going on every day, in different locations around the world. There are JAM pages for special interests. You are free to schedule a show and tell with someone outside your team if you wish to learn what they do. I did these from the start but gained a bit more momentum after finalizing my leaving date. I would suggest scheduling them regularly from the start, to be closer to and deeper into the bigger picture.

Fancy! ☞

Campus and Events

SAP Labs Bangalore campus is big.


There is a badminton court(♥), lawn tennis court, play room, nice gym, cafeterias, path for runners, big parking area, all the cool things to make you feel comfortable!

All festivals are celebrated with the same enthusiasm. Rangoli, special food menus; I also heard dhol sounds on campus at times, people singing and dancing.


Events go on all through the year: hackathons, talks, special interest talks, workshops, sports competitions. SAP organizes a family day, an annual day and a fresher’s day, and people went to schools to teach kids and do paintings with them.

I had a buddy who helped during the initial months to make me a bit familiar with how things work.

During Diwali time


Foood


There were team outings to different locations and resorts, dinners, lunches, LOB events.


I had some work or the other during these times, so I skipped most of the above.

SAP4Good events

I think I attended most of them as and when I got the chance.

The best part, handled by good planning and support, was that after a while we were all free to talk, see everyone around and interact with people on most of these visits.

There were InspireUs events where people like Rahul Dravid and R. Madhavan came and gave talks. My friend is a fan of both. I attended one; by the time of the second one I had left SAP. She didn’t attend, bad me.

I heard they also invited Dr. APJ Abdul Kalam back in 2010; sadly I don’t think I can hear him in person now.

Work Culture

great place

It depends on your team; on you. In releases, I vouched for more work and then worked more.

There are shuttles with Wi-Fi enabled and there were late night cabs, with different timings for both. I tend to walk commutes of up to 3 or 4 kilometers, so I didn’t use the shuttles much.

You can read about employee ratings here and here. Many more are covered here by Mahuya Paul.

I am sure I have missed quite a few things in the fancy ones.

Pheww, with this I have joined Mapbox engineering team, woot! 🎉

 


Tapasweni Pathak | Sieve | 2017-11-20 03:52:36

This is my first blog post 🙂  I wish to use this blog to share my technical pursuits with everyone. I often indulge in philosophical discussions and am intrigued by them! I will try to share them as well if possible.


Soumya | A Keen Observer | 2017-11-19 19:30:24

In this article I want to describe different ways of building a Debian package. As an example, I am going to use the osmocom-sccp package; the tools observed are sbuild, debuild and dpkg.

Sbuild [1][2][3]

Sbuild allows us to build a .deb package from the corresponding Debian source. It also takes into account all the required dependencies. So, there is only one problem – to provide a prepared source package. To obtain it we have to use other instruments.

Another question is where to do all those builds. Most of the time we don’t want to build new packages in our own system, because that would lead to problems related to the great diversity of installed versions of different packages. Thus, we want to create a restricted environment like a chroot. In our case we can create it with sbuild-createchroot [4].

So, let’s create our first package. First of all, we need to obtain our source package. In my case with libosmo-sccp I can download it from the git repo:

git clone git+ssh://kobr-guest@git.debian.org/git/debian-mobcom/libosmo-sccp

Don’t forget that you have to copy your public key to the server. Now we want to install sbuild and set up a build environment:

sudo apt-get install sbuild

sudo sbuild-adduser kira

newgrp sbuild

sudo sbuild-createchroot --include=eatmydata,ccache,gnupg unstable /srv/chroot/unstable-amd64-sbuild http://deb.debian.org/debian

And build the package:

sbuild -d unstable libosmo-sccp

Now we should have our built packages in the current folder. But we have only built the package from the unstable repo.


Kira Obrezkova | Site Title | 2017-11-19 11:38:27

Debugging the ALPHA!


While I have worked with D3.js in the past, the d3-force layout library was entirely new to me and it was very interesting to learn the concepts behind the physical simulations in order to understand the networks and hierarchies of the first and the third party trackers for Lightbeam. It did take enormous blood and sweat to understand this force layout in depth, but the end result is what I am super proud of today.

Simulating physical forces

The d3-force layout module implements a velocity Verlet numerical integrator for simulating physical forces on particles. Verlet integration is a numerical method used to integrate Newton’s equations of motion. In mathematical physics, equations of motion are equations that describe the behaviour of a physical system in terms of its motion as a function of time.

The d3-force layout module

In simple words, this module creates a simulation for an array of nodes and links, and composes the desired forces. As the nodes and links update, they are rendered in our preferred graphics system, Canvas or SVG, by listening for tick events.

After initialising the simulation through d3.forceSimulation(nodes) and registering the links force, it took a lot of trial and error to register the right amount of other forces and achieve the correct simulation.
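
For reference, a bare-bones setup looks roughly like the sketch below; the data shapes and force strengths are placeholders, only the d3-force calls themselves come from the library.

import { forceSimulation, forceLink, forceManyBody, forceCenter } from "d3-force";

interface GraphNode { id: string; x?: number; y?: number; }
interface GraphLink { source: string; target: string; }

// Placeholder setup: a link force to tie connected nodes together, a charge
// force so nodes repel each other, and a centering force for the whole graph.
function createSimulation(nodes: GraphNode[], links: GraphLink[],
                          width: number, height: number) {
  return forceSimulation(nodes)
    .force("link", forceLink<GraphNode, GraphLink>(links).id(d => d.id))
    .force("charge", forceManyBody().strength(-30))
    .force("center", forceCenter(width / 2, height / 2))
    .on("tick", () => {
      // redraw the canvas here using the updated node.x / node.y values
    });
}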

Lightbeam graph using d3-force layout

simulation.tick()

This is part of the d3-force layout and this was the trickiest of all!

I might have read the lines below a dozen times in the d3-force layout documentation, but I was never able to figure out how alpha truly worked. As I am writing this post, I am very happy that I finally understand how this alpha works.

The tick function increments the current alpha by (alphaTarget − alpha) × alphaDecay; then invokes each registered force, passing the new alpha; then decrements each node’s velocity by velocity × velocityDecay; lastly increments each node’s position by velocity.
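
Spelled out as code, one tick does roughly the following. This is a toy sketch of the arithmetic above, not d3’s actual source, although the decay values are the documented defaults.

interface SimNode { x: number; y: number; vx: number; vy: number; }

const alphaMin = 0.001;                             // the simulation stops once alpha drops below this
const alphaDecay = 1 - Math.pow(alphaMin, 1 / 300); // d3's default, roughly 0.0228
const velocityDecay = 0.4;                          // "friction" applied to each node's velocity

function tick(nodes: SimNode[], alpha: number, alphaTarget = 0): number {
  alpha += (alphaTarget - alpha) * alphaDecay;      // alpha slides towards alphaTarget
  // ...each registered force would nudge vx/vy here, scaled by the new alpha...
  for (const node of nodes) {
    node.vx -= node.vx * velocityDecay;             // decrement velocity by velocity x velocityDecay
    node.vy -= node.vy * velocityDecay;
    node.x += node.vx;                              // increment position by velocity
    node.y += node.vy;
  }
  return alpha;
}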

This sounds very simple as I write it. After all, what is the big deal with this alpha? 🙂 But as stated earlier, I didn’t know the true importance of alpha, and as a result, I didn’t know how to stabilise the graph.

May the ALPHA rest in peace 😉

 


Princiya Marina Sequeira | P's Blog | 2017-11-18 16:56:12

This is an initial attempt to improve the graph’s performance for a large number of nodes using web workers. This PR is a work in progress.

The idea here is to off-load the heavy force layout computations to the web worker. This improves performance during page load.

I am also experimenting on passing the logic to web workers during the drag events. This part is still work in progress.

Here is the rough idea that I am trying to achieve:


In the present situation, when we drag, the force layout restarts the simulation. When there is a large number of nodes, the graph takes a lot of time to reach equilibrium, thereby dropping the frame rate. Restarting the simulation is necessary to compute the updated coordinates and give a soft transition; otherwise the drag effect doesn’t appear smooth.

Using web workers, I try to keep the dragged coordinates in the main thread and invoke the web worker during dragEnd. The web worker computes the updated coordinates and waits until the graph attains equilibrium. I still need to figure out a way to get the updated coordinates back and achieve smooth animations.

Web workers

Web Workers is a simple means for web content to run scripts in background threads.

A Web Worker is just a general purpose background thread. The intention here is to run background code such that long running tasks do not block the main event loop and cause a slow UI.
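
For the simplest case, computing a whole layout off the main thread, the pattern looks roughly like the sketch below; the worker file name, the message shape and the draw callback are placeholders, not Lightbeam’s actual code.

// Used in the worker section below; the static-tick loop follows the d3-force docs.
import { forceSimulation, forceLink, forceManyBody } from "d3-force";

// --- main thread ---
interface GraphNode { id: string; x?: number; y?: number; }
interface GraphLink { source: string; target: string; }

function layoutInWorker(nodes: GraphNode[], links: GraphLink[],
                        draw: (nodes: GraphNode[], links: GraphLink[]) => void) {
  const worker = new Worker("layout-worker.js");   // hypothetical bundled worker file
  worker.onmessage = (event: MessageEvent) =>
    draw(event.data.nodes, event.data.links);      // render once the layout is ready
  worker.postMessage({ nodes, links });
}

// --- layout-worker.ts ---
const ctx = self as unknown as Worker;             // common TypeScript trick for worker scope

ctx.onmessage = (event: MessageEvent) => {
  const { nodes, links } = event.data;
  const simulation = forceSimulation(nodes)
    .force("link", forceLink(links).id((d: any) => d.id))
    .force("charge", forceManyBody())
    .stop();                                       // tick manually instead of on an internal timer
  // Run enough ticks for the layout to settle (formula from the d3-force docs).
  const ticks = Math.ceil(Math.log(simulation.alphaMin()) /
                          Math.log(1 - simulation.alphaDecay()));
  for (let i = 0; i < ticks; i++) simulation.tick();
  ctx.postMessage({ nodes, links });
};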

Motivation to use web workers for Lightbeam

Considering the single-threaded nature of JavaScript, using web workers we can take the burden off the main thread when we have to deal with heavy computational work, which is true for Lightbeam. When the number of nodes increases, the force layout takes a lot of time to stabilise. The UI (main) thread is busy calculating the force layout values, which in turn slows down the UI drastically. It becomes difficult to interact with the graph at this point.

When we started this project, performance was the key factor and we were happy to see the improvements Canvas made over SVG. Now we are expecting to see huge performance benefits via web workers.

Stay tuned for more updates!

 


Princiya Marina Sequeira | P's Blog | 2017-11-18 08:01:24

Renata's picture, a white woman profile. She touches her chin with her left hand fingertips

About me

Hello, world! For those who are meeting me for the first time, I am a 31 year old History teacher from Porto Alegre, Brazil.

Some people might know me from the Python community, because I have been leading PyLadies Porto Alegre and helping organize Django Girls workshops in my state since 2016. If you don't, that's okay. Either way, it's nice to have you here.

Ever since I learned about Rails Girls Summer of Code, during the International Free Software Forum - FISL 16, I have been wanting to get into a tech internship program. Google Summer of Code made it onto my radar as well, but I didn't really feel like I knew enough to try and get into those programs... until I found Outreachy. From their site:

Outreachy is an organization that provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

There were many good projects to choose from in this round, a lot of them with few requirements - and most with requirements that I believed I could fulfill, such as some knowledge of HTML, CSS, Python or Django.

I ought to say that I am not an expert in any of those. And, since you're reading this, I'm going to be completely honest. Coding is hard. Coding is hard to learn; it takes a lot of studying and a lot of practice. Even though I have been messing around with computers pretty much since I was a kid, because I was a girl lucky enough to have a father who owned a computer store, I hadn't begun learning how to program until mid-2015 - and I am still learning.

I think I became such an autodidact because I had to (and, of course, because I was given the conditions to be, such as having spare time to study when I wasn't at school). I had to get any and all information from my surroundings and turn it into knowledge that I could use to achieve my goals. In a time when I could only get new computer games through a CD-ROM and the computer I was allowed to use didn't have a CD-ROM drive, I had to learn how to open a computer cabinet and connect/disconnect hardware properly, so I could use my brother's CD-ROM drive on the computer I was allowed to use and install the games without anyone noticing. When, back in 1998, I couldn't connect to the internet because the computer I was allowed to use didn't have a modem, I had to learn about networks to figure out how to do it from my brother's computer on the LAN (local network).

I would go to the community public library and read books and any tech magazines I could get my hands on (libraries didn't usually have computers available to the public back then). It was around 2002 when I learned how to create HTML sites by studying the source code of pages I had saved to read offline, in one of the very, very few times I was allowed to access the internet and browse the web. Of course, the site I created back then never saw the light of day, because I didn't really have internet access at home.

So, how come it is only now, 14 years later, that I am trying to get into tech?

Because when I finished high school in 2003, I was still a minor and my family didn't allow me to go to Vocational School and take an IT course. (Never mind that my own oldest brother had graduated in IT and been working in the field for almost a decade.)

I ended up going to study... teacher training in History as an undergrad course.

A lot has happened since then. I took the exam to become a public school teacher, and more than two years passed without my being called to work. I spent 3 years in odd jobs that paid barely enough to cover rent (and, sometimes, not even that).

Since IT is the new thing and all the jobs are in IT, it finally, finally seemed okay for me to take that Vocational School training in a public school - and so I did.

I gotta say, I thought that while I studied, I would be able to get some sort of job or internship to help with my learning. After all, I had seen it happen easily for people I met before getting into the course. And by "people", of course, I mean white men. For me, it took a whole year of searching, trying and interviewing to get an internship related to the field - tech support in a school computer lab, running GNU/Linux. And, in that very same week, I was hired as a public school teacher.

There is a lot more... actually, there is so much more to this story, but I think I have told enough for now. Enough to know where I came from and who I am, as of now. I hope you stick around. I am bound to write here every two weeks, so I guess I will see you then! o/

Renata D'Avila | Renata's blog | 2017-11-17 02:49:00

It’s 2017 and it seems that plain text emails are just boring for most users. Communicating without the emoji, GIFs or images seems rather stiff and your brand is better than that! It’s approachable and fun, or sleek and high-end, or in some other way remarkable. No plain text email will do.

The world of HTML email is the Wild Wild West of the World Wide Web. It’s basically writing websites like it’s 2004. Forget flexible user interfaces or interactive elements - it’s built on tables upon tables, with no JavaScript and poor CSS support. If you want your design not to break on major clients, you need to carefully place and think through each element, keeping in mind all the various ways that your design could break.

How is email displayed?

The email is sent from a server to an email address that is hosted on another server. The receiving server then hands the email over to various email clients (e.g. the Mail app on iOS), which decide how to display the content. The server and/or email client can strip away some or even all elements that are not plain text - specific or all HTML tags, styles, images, links - leaving the user with whatever content remains to be displayed.

The main difference between an email and a website is that you can’t know what email client the user is going to choose and so you can’t send a specific version to each user - you send the same email to everyone. You can’t detect beforehand if it’s Firefox or Chrome, Gmail or Outlook, you can’t swap images for smaller versions on mobile. You send an email and it sits in your client’s inbox - as soon as they get it, you have no control over how it’s going to be displayed and can’t adjust accordingly.

Understand your users

If you ever went through building a static website or a web app, figuring out how to support the features you want in most browsers was hard enough. Nobody expects every web app to work perfectly on every setup - that would take insane amounts of work and give very low marginal gains. That’s why web developers are telling you they’re going to support the modern browsers like Chrome or Firefox and no-one in their right mind wants to support older Internet Explorers*.

Testing with tools like Litmus or Email on Acid will help you get a better grasp of how your design works in most popular email clients - but there can still be issues with clients that are not being tested by those tools. It is impractical to expect the email to work in all clients - that’s why it’s good to research how your clients are using email.

  1. What webmail providers and/or email clients are they using? Which of those give you the most gains and need to be supported? Consider having a cutoff point for the share of various software in your client base - adding support for additional clients takes time and might also cost you some features; maybe it's not worth doing for 0.01% of your clients.

  2. How are they accessing their inbox? Do they read emails on their desktop or their phones? Decide if you want to design mobile- or desktop-first.

  3. What email campaigns worked best for the measures that are important to you, e.g. how many of your users click links in emails? Think about the things you want to include in your email - you’re taking the time to design it, so make sure you’re not spamming your users with irrelevant content.

Focus on the content

There are many reasons your users will only read plain text emails - some of them do it by choice, some of them by necessity, e.g. if they are visually impaired, and some simply use email clients that only support plain text. Displaying plain text is the one thing every email client does, and they do it pretty well.

Before you design your email, take a moment to focus on the message - why are you sending an email to your users? If you were to strip away all the design, what should be included in it? How should you divide it into paragraphs and make sure the users get your message?

Assuming that your users will always be able to read the HTML version of your email might be overly optimistic. Don’t assume - ensure that your message always gets through and use plain text as your main tool.

There are also things you should consider including in your email, to improve user experience no matter the client:

  • subject - clear and concise, usually under 50 characters
  • professional sender address and name - preferably with your company name
  • pre-header text - some email clients present it in the inbox as a preview of an email
  • footer - preferably with unsubscribe options
  • link to a web-hosted version of the email

Add the design gradually

You might want to include more than just plain text in your message - your logo at the top, an image or a GIF that would make it more appealing, a nice button to encourage users to click the link.

Keep the design simple. Add it in gradually and make sure you always have fallback options, e.g.:

  • for every image, have fallback (alt) text that displays before the images are loaded, e.g. Gmail often loads images only after a user action, so take the time to look at your design with placeholder text instead of images (see the snippet after this list)

  • many clients, including Gmail, won’t support custom fonts (unless the user has them already installed on their computer) - use web fonts to make the design more appealing on some clients, but always have a fallback option like Arial or Helvetica handy; check how your design looks with the fallback font

  • if you’d like something slightly more fancy, like a video or an interactive element, keep in mind it won’t be supported in most email clients and simply link to it in your email in an appealing way
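As a small, hypothetical illustration of the first two points (the image URL, dimensions and font names are made up):

<!-- Fallback text for an image that may never load... -->
<img src="https://example.com/logo.png" width="200" height="60" alt="ACME Newsletter"
     style="display: block; font-family: Arial, sans-serif; font-size: 16px; color: #333333;">

<!-- ...and a web font declared with safe system fonts behind it. -->
<td style="font-family: 'Open Sans', Helvetica, Arial, sans-serif; font-size: 16px;">
  Body copy that quietly falls back to Helvetica or Arial.
</td>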

Rethink flexibility

In web development, we’re getting more and more used to flexible, seamless designs that flow with the ever changing screen size of our devices. Emails, even though they use the same basic blocks of HTML and CSS, are not that advanced - to adjust for different devices, you need to make your design fluid rather than flexible.

  1. Some email clients don’t support media queries - if you want to go from two-column design to a single mobile column or even just adjust the font size, keep in mind Android or any number of less popular mobile clients might not agree with you.

    There are two main breakpoints in email - 600px for desktop and 320px for mobile - but the most bulletproof way is to avoid them altogether, by either using a device-agnostic single-column design or making your design fluid rather than flexible. Think percentages, not breakpoints; define margins and paddings rather than set sizes (see the fluid sketch after this list).

  2. Don’t expect emails to be pixel-perfect - you might really like those rounded corners on your buttons, but making them work in older clients is going to take a lot of work with images and careful placement, and they might still break on some clients regardless of all the effort.

    Start with a simple, flat design and add improvements for more modern clients, but don’t expect them to work across all devices. Just make sure that adding additional elements or flavors is not breaking your base design.

  3. You can’t control your users - some users adjust their setups and increase the default fonts, make their displays monochrome or block images by default. You won’t know how your users see the email exactly and that’s ok - if you design with a fluid approach in mind, they’ll still be able to read your message and find the relevant information.
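A minimal sketch of that fluid approach, assuming a 600px desktop cap (note that some older clients ignore max-width, so treat this as a starting point rather than a guarantee):

<!-- Hypothetical fluid container: 100% wide on small screens, capped at 600px
     elsewhere, with spacing done through padding rather than fixed sizes. -->
<table width="100%" cellpadding="0" cellspacing="0" border="0">
  <tr>
    <td align="center" style="padding: 0 10px;">
      <table width="100%" cellpadding="0" cellspacing="0" border="0" style="max-width: 600px;">
        <tr>
          <td style="padding: 16px; font-family: Arial, sans-serif; font-size: 16px;">
            Content that reflows with the viewport instead of snapping at breakpoints.
          </td>
        </tr>
      </table>
    </td>
  </tr>
</table>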

Mind the quirks

There are many effects and elements you might be used to that cause mind-boggling issues in HTML emails. Some of the more common ones are:

  • custom fonts - you can get away with web fonts, but not all clients support them; also, you have no control over user-specific settings, e.g. if they’re using an increased font in Android, the email client might render it regardless of all your efforts to avoid it

  • overlapping elements/layers - you can’t overlap elements in emails, other than adding a background image, and even that might not work in some clients, but careful use of whitespace might help you achieve a similar effect

  • putting text in symmetrical shapes - there are no flexible CSS properties to ensure the shape will stay symmetrical, so it’s better to use ovals than circles and rectangles than squares

  • motion/animation - email clients don’t support JavaScript and most of them don’t work well with CSS animations, so there is virtually no way to make them interactive

  • hiding overflow - if you want to truncate text, do it before sending the email, as hiding overflow might be problematic

  • gradients - they won’t display in many clients and can cause a solid background color to obscure text

  • non-image shapes other than rectangles - some email clients won’t support border-radius for circles or rounded corners, most don’t play well with more complicated shapes - try to keep it simple with rectangles and/or images

This is not an exhaustive list, but it can give you some ideas on where to simplify your design.

Reading code

If you have some experience with regular web development, you might expect the code to be written in a certain way. HTML email code breaks many of the best practices one gets used to.

  1. In modern web development, using the !important keyword too much (or ever) or inlining styles are often considered code smells. In HTML emails they might be the only way to display the email as intended - it’s not uncommon to see an inlined style with !important, just to force that one stubborn email client to render it properly.

  2. Email code is not DRY - you can define a background attribute on a table and also add declarations for background and background-image in the inlined style of the same element, just because some clients support the table attribute, some understand only the background shorthand, and there are those quirky ones that only render the background-image if forced by a more specific declaration. Oh, and don’t forget about that class that also sets the background with !important (the snippet after this list shows the pattern).

  3. Arranging elements in email relies on tables, the most widely supported layout element across all clients. If you want to make sure your layout is going to work, the number of tables within tables can increase dramatically with each element.
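A hypothetical cell showing that redundancy in one place (the URL, class name and colors are invented):

<!-- The same background declared as an HTML attribute, as an inline shorthand,
     as an inline background-image, and again in a .hero-bg class that uses
     !important for the most stubborn clients. -->
<td bgcolor="#2a9d8f" background="https://example.com/bg.png" class="hero-bg"
    style="background: #2a9d8f url('https://example.com/bg.png') no-repeat center !important;
           background-image: url('https://example.com/bg.png');">
  <table width="100%" cellpadding="0" cellspacing="0" border="0">
    <tr>
      <td style="padding: 24px; color: #ffffff;">Yet another table nested inside a table.</td>
    </tr>
  </table>
</td>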

Summary

Designing emails is different from designing websites - email clients have at best patchy support for CSS and often require writing code that breaks the best practices of web development.

  1. Understand your users - have a list of email clients you want to support, understand what to include in your emails and how to appeal to your user base. Decide if you want to design mobile or desktop first.

  2. Focus on the content - write a plain text email first, structuring it by using paragraphs, bullet points, links. Make sure to always include a plain text version of your message.

  3. Add the design gradually - the simpler the email, the better the chance it’s going to work well on multiple clients. Add in fallback for your features (images, fonts, videos) and make sure to check how your message looks using only the fallbacks.

  4. Rethink flexibility - think fluid, not flexible. You can’t control how your user is going to see the design, so let go of pixel-perfect and make sure to design with percentages and paddings, rather than set sizes.

  5. Mind the quirks - the simpler, the better. Email clients have many quirks and the more complicated the design, the higher the chance it’s going to look bad in some clients.

  6. Reading code - email clients require you to write code that’s not in line with web development best practices. Get used to !important and loads of tables.

Further reading:

  1. Xfive 10 Tips for Better Email Design
  2. Tuts+ The Complete Guide to Designing for Email
  3. Sigstr How People ACTUALLY Read Your Emails
  4. Campaign Monitor CSS Support Guide for Email Clients
  5. MailChimp Email Design Guide
  6. Webdesigner Depot The ultimate guide to email design

*Though it is sometimes necessary to support IE8, that's mostly because the target user group uses outdated software.

Alicja Raszkowska | Alicja Raszkowska | 2017-11-16 20:49:00

Do you use xargs? I’ll be making a small utility using it (going to post about it soon).

As I started reading up on it, I figured it's a pretty cool command that can be really beneficial at times for executing multiple commands in one go. But, while reading and trying the commands, I had a question in mind,

[meme: do I really…?]

And so, I searched about it and, obviously, I was not the first one to have this question in mind. Below is a compilation of things I figured out myself and some I found on the internet.

Firstly, for those who do not know what xargs is, here’s the stolen definition (come on, you can’t blame me, I’m a programmer) from the man page:

xargs reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by items read from standard input.

The definition is pretty forthright, but who reads definitions, right? ( 😛 )

The command when executed without any arguments, produces the following result:

[screenshot: running xargs with no arguments]

which is pretty awesome because it's no result at all. Who am I kidding (You!), right?

Here comes the crucial role of the definition from the man page, which we royally chose to ignore at first. xargs executes the echo command by default, which means that if we type anything in the console now and then press Ctrl + D, we’ll get the result of passing that input as arguments to the echo command.

[screenshot: xargs echoing the typed input after Ctrl + D]
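In case the screenshots don’t load for you either, here is roughly what such a session looks like in text form (the input lines are whatever you type):

$ xargs
hello world
how are you
(Ctrl + D pressed here; xargs runs: echo hello world how are you)
hello world how are you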

[meme: you see what…?]

Now, let’s jump to some nice examples and scenarios where we can use this command. I found that a very common usage is to find all the files with a particular name/extension.

[screenshot: using find through xargs to list files with a given extension]

(Please don’t be flattered by the ethereal nomenclature of files.)

But, what’s so fancy about it? Was xargs even needed? The result would be the same had I done the following:

[screenshot: the same search using find on its own]

What if I need to find files with multiple extensions, would find still be a friend?

[screenshot: find choking on multiple extensions]

Not really.

But, remember what xargs does: it feeds arguments from stdin to the command, which means there should be a way of passing multiple parameters to the command such that it does not fail and gives the desired result. Well, there are two such ways.

  • -L max-lines

Use at most max-lines nonblank input lines per command line.

This means that, since the find command is capable of handling only one pattern at a time, we should use xargs this way:

[screenshot: using xargs -L 1 with find]

During the execution of this command, you can provide multiple arguments one at a time, since find is capable of handling only that.
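In text form, such a session might look something like this (the patterns and file names are made up; each typed line immediately triggers one run of find):

$ xargs -L 1 find . -name
*.txt
./notes.txt
./todo.txt
*.md
./README.md
(Ctrl + D to finish)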

There is a better way though.

  • -n max-args, --max-args=max-args

Use at most max-args arguments per command line.

If you think the previous option is not very useful, as you have to wait for one command to finish executing before providing the args for running it again, you can go with the smarter option.

[screenshot: using xargs -n 1 with find]

This works exactly how you want find to work. We provide n=1 because we want the args to be passed individually (because of the restrictions of the find command).

Works like a charm. Note that in both the cases above, the find command runs multiple times with the arguments. xargs does not override the default behavior of the command.
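Again in text form, assuming the same made-up file names:

$ echo "*.txt *.md" | xargs -n 1 find . -name
./notes.txt
./todo.txt
./README.md

xargs splits the input on the blank and, with -n 1, runs find once per pattern.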

There are cases when you’d have spaces, newline characters, etc. (not etc etc 😉 ) in the file names; those can be handled with some options too (for example, piping find -print0 into xargs -0).

Some really cool stuff

[meme: yayyyyyyy]

While going through different sources on the usage of xargs, I found a different and awesome one on Stack Overflow.

[screenshot: the Stack Overflow answer]

Link to question here.

Thanks for reading. Write down in comments if you have used xargs for some better cause.


Shivani Bhardwaj | Imbibe Linux | 2017-11-16 18:29:46

Well hello

Coursera was the epitome of the ultimate good on the Internet: free courses on programming, machine learning, probabilistic graphical models and so much more! But alas, all good has to come to an end. As of Dec 2017, Daphne Koller and Andrew Ng have made quite a few steps towards monetizing the platform (and who can blame them? they are a startup after all). All the legendary courses, whose names were well known among my fellow programmers, were either deleted (Stanford Compilers) or turned into paid specializations (Stanford Algorithms and even the Probabilistic Graphical Models by the previously mentioned founder of Coursera, Daphne Koller!).


Compilers course logo / openclassroom.stanford.edu

But, there's still some fun to explore for me and my fellow penniless adventurers! And here is my top of the best free courses on Coursera, 2017 reality check:

1. Algorithms I and Algorithms II by R. Sedgewick of Princeton.


/ coursera.org

The top of the top in my personal courses list of all time. Sedgewick, who actually invented the red-black tree with his colleague at Stanford, in addition to being a real-life grandpa from Up!, is a brilliant teacher; it is quite impossible not to understand what he wants you to understand :) He sometimes goes into so much detail that I had to listen at 1.5x speed, but for some people it could be quite a good pace. Also, you get to solve 5 puzzles in each course for free! And they are really cool. Highly recommended, 100/100.

2. Machine learning by Andrew Ng of Stanford.


 / coursera.org

The legendary founding father of Coursera keeps his first course free of charge, and I'm quite happy with that! Extremely engaging lectures and even more engaging programming assignments keep you busy for the whole of 11 weeks! Loving it!

3. Cryptography I by D. Boneh of Stanford.


/ coursera.org

In his work, Boneh goes through all the basics and details of -- you guessed it -- cryptography. I would say the course contains enough material for a whole undergraduate course.

4. Introduction to Mathematical Thinking by K. Devlin of Stanford.


/ coursera.org

Haven't seen a lot of this course, but it was enough to decide to definitely return to it in the future. Introduction to Mathematical Thinking will help you develop a creative way of thinking and solving all kinds of problems, for example, how to prove a theorem that nobody was able to prove before.

5. Game Theory by M. O. Jackson and K. Leyton-Brown of Stanford and British Columbia.


/ coursera.org

Ever thought about what to do if we play a game against another player who knows that we are playing the game and also knows our best strategy, just as we know theirs? Who will win in this situation? Game theory can answer that. It's not your usual application of game theory to competitive programming problems, nor is it a game development course, but it's quite interesting and kind of gets you onto another plane of thinking.


And down here, let me have a quick rant on Coursera specializations. They ruin all the magic of free education, they really do! And the platform is constantly shoving them down my throat, to such an extent that it's impossible to search for courses only; one can only search for keywords, like "machine learning", and then get 100000 "only $49 a month!" specializations, which is absolutely not what one was hoping for.

So, Coursera, please give us our free education back! I understand you need machine time to review the programming assignments, but come on! I can volunteer you some of mine, and I think many of us will! I will even pay for certificates, I promise :)

Asal Mirzaieva | code. sleep. eat. repeat | 2017-11-15 12:05:49

Well hello


The year is getting to its end - who would not think it's the most suitable time to look back at the books I've read so far?

1. The magic of thinking big by D. J. Schwartz

 / penguin random house /

The most inspiring book so far.


2. Rich dad poor dad by R. T. Kiyosaki

/ wikipedia /
#1 personal finance book of all time? Looks more like garbage to me.


3. You are a badass at making money by J. Sincero
 / kobo.com /
To some point inspiring. Bullshit to a lot of other points.


4. #GIRLBOSS by S. Amoruso

 / goodreads /

Total bullshit, a poetic retelling of main character's bio.


5. Take your time by Easwaran
  / goodreads /

Total bullshit, the content of this small book can be expressed in one sentence: less bustle, more meditation.


6. Accelerated C++ by A. Koenig and B. E. Moo


/slant/

As the authors state at the beginning of the book, it is a transcript of the course they used to teach to C programmers. Agonizingly obvious at times and mediocre at others.


Next year I'll try to read at least one book a month.

Asal Mirzaieva | code. sleep. eat. repeat | 2017-11-15 11:56:21



Goku Kamehameha Death Star shirt

Debate: Can Goku completely destroy the Death Star with one Kamehameha wave? Ok nerds, imagine the Death Star firing its super laser at the Earth, and just before we’re all blown into a zillion pieces, Goku uses the Kamehameha wave and gets into this huge push battle against the Death Star laser, shoving the laser right back into it while also obliterating it with his own Kamehameha, sending all those Imperials straight to hell and saving…

Julia Lima | Thu thm t uy tn | 2017-11-15 04:59:34

This is the second episode of the BaseCS podcast.

Encoding was on my reading list for a long time and I am very happy to check it off today.

[cartoons for this episode]

Resources

 


Princiya Marina Sequeira | P's Blog | 2017-11-11 17:33:28

If you are following the tech feed on Twitter, then you couldn’t have missed the news of a new Podcast being released to teach Computer Science fundamentals. It is the BaseCS Podcast with Vaidehi Joshi & Saron Yitbarek. If you haven’t listened yet, please do!

Tech cartoons are my newfound hobby and this series of posts is my attempt to listen to and learn from this wonderful podcast.

In this episode, let’s learn about bits, bytes & binary.

[cartoon for this episode]

Resources:

bits


Princiya Marina Sequeira | P's Blog | 2017-11-10 04:55:16

I’m a researcher and interaction designer who’s been teaching myself UX for nearly two years. I’ve recently started trying to learn IA, and have been looking for jobs in the Boston area for most of that time. I’m not really sure what I’m looking for from a mentor, other than perhaps sympathy for how difficult finding a job has been, and maybe connections? (https://suzannehillman.com — for the curious)

Suzanne Hillman | Stories by Suzanne Hillman on Medium | 2017-11-09 23:28:13

I recently got inspired to try pure CSS single div animations, as I came across this blog post (thank you, Gabrielle!). There were only two problems:

  1. I can’t draw
  2. I don’t know enough about CSS

It seemed impossible to tackle without a lot of effort, so I’ve been putting it off, until I realized the more I think about it, the harder it gets. I decided this is more of a learning challenge than it is a problem. Would I ever be able to draw well enough to be a designer? I don’t think so. Would I be able to have fun creating a small project from time to time and see improvements? I betcha.

How do I usually approach learning a new programming concept?

  1. I read about it and find a tangible example
  2. I look around the example and break it in a controlled way, to make sure I understand what happens
  3. I come up with a small enough problem of my own that uses the concept I want to learn and keep the example as reference, googling for other ideas and suggestions
  4. I incorporate the concept into other projects, to make sure I understand it

For the animations, I decided there are a few key factors:

  • I want to make them cute, so that I’ll share them and get encouraged to create more
  • I want to write quality code, not just make them look good
  • I’d like to be able to plan my approach before coding, either by drawing them by hand or breaking into building blocks

I can’t draw

The more I work on my laptop, the less I write, so my handwriting went from cute tiny letters to barely legible tiny squiggles. And even though I can draw a passable cat (meaning - my youngest sister can identify it as a cat without much trouble), I don’t think I can draw.

For the initial experiment, I decided to explore what makes for a cute cat. I looked for various tutorials and thought of a few key features:

  • pointy ears, usually a bit rounded
  • cute little nose, sometimes heart-shaped
  • big eyes, with or without snake-like pupils
  • tiny mouth, often smiling or meowing
  • whiskers, usually 2 or 3 on each side
  • oval or rounded head

I then explored them on paper, to get a better grasp of what I need.

Finally, I worked to recreate this cute cat animation and it sort of, kind of worked. Granted, it’s not my own design and I didn’t fully understand what was happening with some parts of it, but I’ve learned to decompose an image and then translate it into basic shapes, colors and layers. Also, it wasn’t yet a single div image - but I was getting closer.

I don’t know enough about CSS

Don’t get me wrong, I know enough CSS to create websites and work on HTML email templates (which require you to write CSS like it’s 2004), but I’ve never gone into more intricate styling techniques.

For one, I am not a designer and designers usually provide me with either images or relatively simple UIs to code. It doesn’t make much sense to use CSS images in a production environment - who has the time to write and review them?

But CSS seems like a good medium for me to draw - I can build on basic shapes as I deconstruct images and the examples are reproducible (which I can’t yet say about my hand-drawn images).

I looked for more examples and came across this talk by Wenting Zhang. I decided to follow along with the tutorial and create a simple mustache animation.

It was a great exercise, as it taught me a bit more about box-shadow, most importantly, that there is no limit on the number of shadows and they can be moved around, stretched and recolored. It also helped me start treating :before and :after pseudoelements as layers.

I then proceeded to create a few more animations, trying to get a good grasp of how I should divide my images between those layers. It turns out there is a lot of flexibility with shadows, but reshaping is not their strongest feature - they mostly stay the same shape as the initial layer and, although they can be manipulated with border-radius, I wanted more than a few basic shapes in my drawings, so I needed to learn more.
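To give an idea of the technique, here is a toy sketch (not one of the animations mentioned here): one div, its ::before layer and a couple of extra box-shadows already produce several shapes.

/* A single element drawing three dots and a bar underneath them. */
.dots {
  position: relative;
  width: 20px;
  height: 20px;
  border-radius: 50%;
  background: tomato;
  /* two more copies of the circle, offset to the right and recolored */
  box-shadow: 30px 0 0 gold, 60px 0 0 teal;
}
.dots::before {
  content: "";
  position: absolute;
  top: 30px;    /* a second "layer" below the dots */
  left: 0;
  width: 80px;
  height: 6px;
  border-radius: 3px;
  background: slategray;
}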

Still, I could now create pretty complex shapes with three layers and box-shadows, like this animation of the Recurse Center logo.

Creating shapes with a single div is sometimes quite challenging and can use all three layers (the div, its :before and :after). One shape a tad more complex that I wanted to use was a chevron, in order to create a simple animation of the Xfive logo, similar to the one on their website, but made with pure CSS. For this I needed to learn and understand gradients better, so I turned to the CSS Secrets book - after doing a few exercises, I felt ready to create multiple gradient layers and mix them up with pseudoelements and box-shadows.

What’s next?

I want to further improve both my drawing and CSS skills. I’m planning to do a bigger project soon, but for now I’ll try to do a few CSS animations a week - you can check out my GitHub repository for the animations’ code and my CodePen to view them (along with some bits and pieces from various old projects).

Alicja Raszkowska | Alicja Raszkowska | 2017-11-04 21:45:00

I participated in the Increasing Rust’s Reach program from August to November 2017. My project was #7 Finding Missing Pieces in the Crates Ecosystem with Andrew Gallant. I quickly learned that Andrew, aka @burntsushi, is well known in the Rust community for developing ripgrep, a command line search tool that is faster than the Silver Searcher and grep. Just a web search on ripgrep will show you how often it is mentioned by others. Visual Studio search is powered by ripgrep. I was in awe that I was selected to work with Andrew, AND he is a really nice and knowledgeable guy to work with.

The plan for my project was to take a small application in Python and port it to Rust. Andrew is a member of the Rust Libraries team that oversees the Rust standard library, rust-lang crates, conventions, and ecosystem support. The goal is to see how my project could reveal gaps and missing functionality in the Rust library and crates that a new user may encounter. The Library team could then consider how to make the library more usable.

At my previous job at Lawrence Berkeley National Laboratory (a U.S. Dept of Energy lab), I built systems to monitor energy use in buildings and urban systems for research to inform energy policy. I was primarily using Python and open source libraries for my projects. I became interested in learning a modern systems programming language, such as Go or Rust, to see how it could improve performance in our systems.

I decided on writing a Rust application for the Raspberry Pi. I also have the Sense HAT add-on board with LED matrix. I focused on writing out to the LED matrix, for example, this Python example to display a dynamic rainbow pattern.

It essentially comes down to translating the set_pixels() function and supporting function _get_fb_device().

It requires representing the pixel map with the correct type representation, reading a file, composing the path for another file, converting the pixel map to the binary representation, and writing to that file.
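To give a flavour of what that translation involves, here is a minimal sketch of the conversion and write step - an assumption-laden simplification, not the actual port: it assumes the Sense HAT framebuffer lives at /dev/fb1 and expects little-endian RGB565 values.

// Simplified sketch, not the real code: pack 8-bit RGB into 16-bit RGB565
// and write the 8x8 pixel map to the framebuffer. In the real port the
// device path comes from _get_fb_device(); /dev/fb1 is only an assumption.
use std::fs::File;
use std::io::Write;

fn to_rgb565(r: u8, g: u8, b: u8) -> u16 {
    ((r as u16 >> 3) << 11) | ((g as u16 >> 2) << 5) | (b as u16 >> 3)
}

fn set_pixels(pixels: &[(u8, u8, u8); 64]) -> std::io::Result<()> {
    let mut buf = Vec::with_capacity(128);
    for &(r, g, b) in pixels.iter() {
        // the LED matrix expects each pixel as a little-endian 16-bit value
        buf.extend_from_slice(&to_rgb565(r, g, b).to_le_bytes());
    }
    let mut fb = File::create("/dev/fb1")?;
    fb.write_all(&buf)
}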

This seemingly basic and simple application took me down the journey to learn and discover:

  • Types
  • Options and Results
  • ? operator
  • Error handling
  • File handling
  • Path and PathBuf types
  • Parsing String and &str
  • Bit manipulation and 16-bit RGB565 representation
  • vec vs arrays, and the clone trait. Objects allocated on the heap vs stack.
  • You can import some external crates to play.rust-lang.org. (example) List of supported crates
  • “I don’t understand the compiler error. Let me try adding some muts and &s and see if it compiles.”
  • “Yay, it compiles! I have no idea why that worked! I’m not even sure if it’s actually still correct for the functionality I was working towards or that I just shut up the compiler”
  • Debugging Rust with breakpoints is not officially supported. However, you can use gdb/lldb. One option is to use gdb/lldb with Visual Studio Code.
  • Many times, I was frustrated that I knew exactly how to express what I wanted to code in Python, but was clueless as to how to write it in Rust. Thus, learning a new language :)
  • Many times, it really just came down to Andrew telling me exactly what the code snippet should look like. And then of course it worked. :P

Through this project, I learned much of the Rust basics, but made slow progress towards accomplishing the goal I defined at the beginning of the Rust Reach program. It’s ok, as Rust is known to have a steep learning curve, and now I can write Rust code a bit more fluidly without getting stumped every few characters. I plan to continue to stumble along even after the program ends. I am almost to the point of writing out the file on the RPi, but am still working out a file write error.

In the following weeks, I plan to publish blog posts that discuss in detail the lessons I have learned during this project. For example, I plan to write a post on String vs &str and vec vs array, specifically in the context of my project. I also finally learned what turbofish is at RustBridge, so I may dedicate a post to explain that also. Note that I’m still learning Rust, so my explanation may be flawed and I welcome constructive criticism to improve my understanding of these topics.

Big thanks to: Andrew Gallant for volunteering as my mentor for this program; Carol (Nichols || Goulding) for coordinating the Increasing Rust’s Reach program and organizing the Rust Belt Rust conference; and the many other awesome members of the Rust Team and community for encouragement and advice during my learning process.

I will be giving a talk on this project at PyCascades (Jan 2018) in Vancouver, B.C., Canada. I’m hoping to have made some more progress on it by then!

Anna Liao | Stories by Anna Liao on Medium | 2017-11-04 20:28:43

I had the amazing opportunity of conducting a workshop on Aframe (WebVR) at Alva’s College of Engineering, in Mangalore (a city about 350 km from Bangalore) on October 28th 2017. The participation was quite overwhelming (over 70 students attended the workshop). I spoke about Mozilla’s goals in general and about the importance of WebVR in the future. This was followed by an introduction to the open source philosophy. I then covered the basics of HTML, CSS and JavaScript to get them started. This was followed by the Aframe hands-on. We concluded the day-long workshop with the students trying out A-frame projects on themes of their choice. Students from all four years of engineering and different branches turned up. I had the opportunity of interacting with the HODs of the Computer Science department at the college. The co-ordinators Prarthana and Trupti were extremely helpful, the college has a great campus, and I was overwhelmed by their hospitality. This being the first ever Mozilla event at their college, I look forward to many more events happening at this college as a part of the Moz-Activate initiative.


Rutuja Surve | rutujasurveblog | 2017-11-04 18:13:54

You Gotta Love Frontend (YGLF for short) is an annual conference for frontend developers held in Tel Aviv. Well, next year it will be held in Kiev, but…

I was lucky enough to get a ticket from the Outreachy travel allowance, because of my participation in their intern program. I just returned from there and, let me tell you, it was great. The speakers and the talks were truly top notch.

I thought it would be nice to capture some impressions and thoughts with a short listicle, so without further ado, I give you my own, curated 5 Best Talks of YGLF 2017 in Tel Aviv:

Amazing talk on
CRAFTING FASTER IN THE DARK
by Richard Feldman (RedInk)
9:30 AM – October 30, 2017

Summary: Richard showed us the beauty of Elm, the frontend functional programming language that he has fallen in love with lately. He showed us its great and smart compiler and told us how he got himself 4 hours of Elm coding without Wi-Fi…
His Git example repo is here

Crazy new CSS structuring on
CSS IN A COMPONENTS WORLD
by Ido Rosenthal (Wix)
11:15 AM – October 30, 2017

Summary: Ido suggested how we can write typed CSS that can abstract a component’s internals with a stylable API, keeping the power and simplicity of CSS with http://styleable.io

How to scan CVs for maturity and not fall for the buzzwords on
FRONT-END ENGINEERING – YOU ARE HOLDING IT WRONG!
by Boris Litvinsky (Wix)
15:45 PM – October 30, 2017

Summary: Boris talked about how the flood of tools and frameworks in the front-end development world has caused both developers and their employers to lose sight of the things that really matter – software craftsmanship. It’s demand for Angular over TDD, SASS over SOLID. This loss of focus on the essentials is damaging in the long run and, sadly, it is all self-inflicted. In his talk, Boris introduced us to CDD (CV Driven Design), went over its long-term effects and explored solutions.
It was so refreshing to hear a more mature voice speaking beyond new technologies, which are important, but not more than good old software programming principles.

Let’s see how your productivity can increase on
AVOIDING PRODUCTIVITY MOUSE TRAPS
by Alex Wolkov (Fundbox)
11:45 AM – October 31, 2017

Summary: Alex covered a few specific scenarios that if improved, will unlock a path towards faster and more productive workflow.
His slides are here

And last but most surely NOT least –

No need for FE frameworks on
STENCIL: THE TIME FOR VANILLA WEB COMPONENTS ARRIVED
by Gil Fink (Taboola)
14:30 PM – October 31, 2017

Summary: Gil showed us how web development changed dramatically during the last years. With the enormous amount of JavaScript libraries and the new HTML5 standard, today it is much easier to create web apps. When building a web app, you will probably want to reuse some of the web components you built. He showed how we can do that with the current state of HTML.

His Stencil introduction is here

Other awesome talks that didn’t make the list

WHEN EAST MEETS WEST, WEB TYPOGRAPHY AND HOW IT CAN INSPIRE MODERN LAYOUTS
by Chen Hui Jing (Wismut Labs)

REAL-LIFE TIPS ON HOW TO BOOST YOUR DEVELOPMENT TEAM’S PRODUCTIVITY FROM WITHIN.
by Daniel Lereya (DaPulse)

ANIMATING VUE
by Sarah Drasner (Microsoft)

BIG BANG REDESIGN: SMASHING MAGAZINE’S 2017 RELAUNCH, A CASE STUDY
by Vitaly Friedman (Smashing Magazine)

Parting Words –
All in all, the conference was a great, mind-expanding experience. My thanks go to the devoted organizers and the speakers, you made this conference awesome!


Ela Opper | FoxyBrown | 2017-11-01 09:50:37

The Open Source Summit Europe is, sadly, over. In this post, I will show you a glimpse of the conference. It’s been an intense week, I’ve met a lot of enthusiastic people, I’ve attended great talks, I caught up with the latest tools and projects and I feel I am part of a remarkable community of smart and passionate people. It’s like having friends all over the world, everyone with different backgrounds, different stories, but bonded by the same love for technology.

In the first day of the conference I was a little nervous. I didn’t know many people, it was the first event of this kind I was attending, so everything seemed a little overwhelming. Luckily, on that day, I had lunch with other women attending the conference thanks to the “Women in Open Source Lunch” event, sponsored by Adobe. I got to know other people, share stories, take pictures, make friends. Great initiative! As a suggestion, I believe we should also have a similar networking event for newcomers, so they can mingle with each other and make friends. After the lunch, I attended some talks and no longer felt alone, because I’d see a familiar face everywhere.

One of the talks I’ve attended was called “Fast and Precise Retrieval of Forward and Back Porting Information for Linux Device Drivers” by Julia Lawall. Julia updated us on tools that help collect information needed to port device drivers for newer or older kernel versions. I’ve learned about “Prequel”, a tool that queries git commit histories and provides a list of most relevant commits, that can help you solve your porting issue. The results are preceded with a percentage value that represents how likely it is that the respective result is relevant for your situation.

Another talk I’ve attended is “printk() – The Most Useful Tool is Now Showing its Age”. It turns out that providing a very good, efficient implementation of a tool like printk() is more complicated than it seems.

[photo from the printk() talk]

“Bash the Kernel Maintainers” was fun to attend. It aimed to improve the communication between maintainers and developers and to find solutions for different issues encountered by people who want to contribute to the Linux kernel. Luckily for me, the maintainer of the Industrial Input/Output subsystem, for which I’ve written my driver, is very responsive and kind. I had no problems reaching him or getting feedback from him.

Having the chance to present the work I did over the summer meant a lot to me.

[photo: presenting my talk]

I’ve presented all the steps I’ve taken in order to build up a driver for the air quality sensor CCS811. I also talked about my experience as an Outreachy Intern.

I also participated in the Kernel Developer Panel moderated by Jonathan Corbet.

[photo: the Kernel Developer Panel]

I was very excited to be there. I was also very nervous. The room was full, the other speakers were, all, more experienced than me, so I felt a little pressure, given I was the newcomer. Fortunately, Jonathan was a great moderator and thanks to him I got a little more confident.

[photo]

I believe that attending events like this one motivates you even more to keep contributing to open source. When I was sending the patches for my driver I was very enthusiastic to contribute to Linux Kernel and I was focusing on improving my programming skills to be able to make even greater contributions and build useful tools. But I wanted to get to know the people I was communicating with via email and I wanted to interact with them face-to-face. Now that I was able to do just that, I feel part of the family and can’t wait to find more open source challenges.

I’ll leave you now with this beautiful picture of Prague, and hopefully, we’ll meet again in Edinburgh, next year!

[picture of Prague]

Narcisa Vasile | narcisaam.github.io | 2017-10-30 00:00:00

Conclusions made from reviewing Udacity students’ code (for any level programmer).

  • Simpler is better. Sometimes highly intelligent people write convoluted code. I think they feel the complexity must reflect their intelligence. However, the downside is that this makes the code harder to share and harder for another programmer to debug. The best code I’ve seen balances readability with abstraction and conciseness. Sometimes super-abstract code is harder to understand or is error-prone because it doesn’t handle all basic use cases.

 

  • Don’t code for yourself, code for the next programmer. This is closely related to the first point, but it speaks to considerations like including just enough inline comments and providing readable code. Writing code that is as clear as possible is considerate to those who share your code. This reflects the idea that, as a programmer, you have 2 types of users (the application user and the developers who work with your code base).

Carol Chung | CarroTech | 2017-10-29 18:38:07

As a Code Reviewer for Udacity, I go over a lot of students’ code. These are some observations based on common patterns.

  • Make sure to understand requirements. A lot of students are in a hurry to turn in the solution and get to the next assignment. Sometimes assumptions are made about the requirements, or the student revises the requirements to match how they think the app should behave. Making assumptions takes more time to fix in the long run. Ask questions and get clarifications before you start to plan the solution.

 

  • Get the basics right first, then add enhancements. A minority of students are eager to show off nice to have enhancements like design or added features or complex interactions. But sometimes this extra effort is made before basic requirements have been met. Make a checklist and meet the core requirements first. Then add special features. Then refactor. Sometimes overly abstracted code leads to more bugs. Test everything on your checklist.

 

  • Try to code without libraries at first (as much as possible). Try to code with vanilla JavaScript before frameworks. Try to code with plain CSS before Bootstrap/Sass. By all means, learn the JS frameworks and CSS pre-compilers and accompanying libraries. But try to code everything by hand first (for example, sometimes using a library to handle forms in React or Redux abstracts away the logic, which makes it difficult to troubleshoot when an input handler goes wrong).

 

  • Show that you care about your work. Looking at someone’s code and app UI/interactions, it is quickly obvious how much pride the coder takes in their work. It is clear when the student is just trying to pass or is truly bringing their A game. When taking a course, really try to learn the best practices and ask questions. Skimming through a course (even when busy with a full-time job) is not really getting value out of the experience. Take the time to learn to do the job right during a class because frequently when the skills are put to use in a job, there may not be enough time to dot all your i’s and cross all your t’s. You’ll want to know all the tradeoffs you are making and why.

 


Carol Chung | CarroTech | 2017-10-29 18:26:28

My co-intern at Lightbend and I were lucky enough to get the opportunity to attend Reactive Summit 2017.

Reactive Summit

It is a conference about microservices, fast data and distributed systems, which are the trends of today. For the first time this year, they offered a packed workshop schedule. These hands-on training workshops were designed to help improve our understanding of microservices, fast data, and domain-driven design. We could choose from 1-day, 2-day, or 3-day training:

  • Lightbend Reactive Architecture – 3-day course

  • Developing Microservices Using Actors and Domain-Driven Design – 2-day course

  • Being Reactive with Vert.x and Kubernetes: From 0 to (Micro-) Hero – 2-day course

  • Beyond the Basics: Designing Actor-based Systems at Scale – 2-day course

  • Lightbend Akka Streams – 1-day course

We went for the Lightbend Reactive Architecture 3-day workshop. We learnt a lot about designing domain-driven programs, the need to move towards microservices, and the problems associated with object-oriented programming. So our visit to Austin started with this workshop.

Then, on 18th October, the events of the conference started: registration, followed by the keynote presentation, which was followed by the Welcome Reception and Exhibition.

These events do not exist in a vacuum; they are perceived in their immediate historic context, with sprinkles of one’s personal life events. We got the opportunity to interact with high-tech people, all experts in their own domains.

During this exhibition, we came across very popular exhibits by YoppWorks, IBM, PayPal, Squbs and many more. There was one exhibit on Lagom as well, the project I worked on. It was really a proud moment to be part of the Lightbend and Lagom community.

The exhibition gave us some time to meet our team members, our mentors and other Lightbend community members in person. Since it was the first conference I was attending, I wanted to grab each and every moment of it. I was really excited to see how it would all go.

There was a planned Reactive Summit party at night with The Spazmatics.

The next day, all the events of the conference started with a bang. There were many interesting talks going on at the same time. Unfortunately, we could attend only one of them at a time. It was really tough to choose; all of them seemed equally promising and interesting.

It is rather unfair to identify a talk as the best, since there were many great ones, but these ones hit me directly in the feels.

The ones which attracted me most and I found really really interesting were “Building Microservices that Scale and do not Fail ” , “Microservices, The Future of Society, and all that … (or modularity for extroverts and introverts) ”, “Fast Data with Apache Flink at ING - lessons learned from designing and building a large streaming analytics system ” , “From CRUD to Event Sourcing - Why CRUD is the wrong approach for microservices ” and “Designing a reactive real-time data platform: Architecture and Infrastructure Challenges ”.

All the great concepts and talks aside, there is still a feeling that we just like to pat ourselves on the back. It is pleasant to feel that we are better than the rest of the world because we are woke, aware developers. Sobering talks helped, but these things are still prevalent in tech and especially at tech conferences. It's a common belief that programmers and developers are mostly male, so meeting a few equally good female developers felt really great.

All in all, the Reactive Summit provided me with a really mind-blowing experience which I will cherish all my life.

Ruchika Lakhina | blog | 2017-10-29 00:00:00

I had the good fortune of attending LinuxCon Europe, 2017 in Prague, Czech Republic. I met Julia, who was my mentor and is literally a superwoman. She does research while managing Outreachy and replying to the hundreds of patches that come in. She had two talks at the conference!!

I met my mentor Rik, who is kind and motivating and quite literally the best mentor ever. He has always been encouraging and has always urged me to ask more questions, regardless of whether they were stupid or not. I will be forever grateful to Rik for stepping in as my mentor. I met Matthew Wilcox, who has been a kernel hacker for almost 20 years now! Matthew introduced me to quite a few people who have been kernel hackers since the very beginning :) They reminisced about stories from when Matthew’s signature used to be ‘Listen Bill we know you made great software but look at what we are doing here’ :D

This is me speaking about PIDs at the conference! (Thank you for the picture Ann :))

I met my fellow Outreachy Linux Kernel interns, Bhumika, Eva, Narcisa, Sayli and Varsha. I met Vaishali who is now a kernel engineer (and essentially invented her own position at her company). I met Ann Barcomb, who spoke at the conference about managing casual contributors in open source (She was also the second programmer at Booking.com!!!). I met Keerthana who did her GSOC with Debian and is now a mentor for Outreachy for Debian. I met Jaminy who also did her GSOC with Debian. I met Ines and Kara who are a part of Rails Summer of Code.

I spoke about PIDs and how they were allocated and the changes that I made. I talked about why we used the IDR API and the stats after replacing the bitmap with the IDR API. I have never seen so many people in one place who are so passionate about what they do. Matthew introduced me to kernel engineers who work at places like Facebook, Google, etc. and when I asked them what it is that they do, they replied with ‘Whatever interests them in the kernel and whatever it is that needs fixing!’. Rik told me about how, when he moved from Conectiva to Red Hat, he took his to-do list with him. My biggest takeaway was that you can be working for/with the kernel for 20+ years and there will always be something new, something old to fix, and you will probably never be bored (or is that survivorship bias, I don’t know :)).

TL;DR I met super talented people, got to speak at LinuxCon and possibly found allies for life!

P.S. For those interested, you can find my slides here.

Gargi Sharma | Stories by Gargi Sharma on Medium | 2017-10-28 16:41:33