When small differences are meaningful

My father’s office was always a mess. His desk was covered with so many documents in so many layers that there was only room to place something on it by piling it on top of something else. To an extent I inherited this: I also kept messy rooms and desks. I realized this was because I’d treat objects almost as “to do” lists. If I got a letter and didn’t have time to open it, I’d place it on my desk. I intended to get to it later but didn’t always. Things eventually piled up.

At some point I wondered what would be the easiest way to keep things organized with the least effort. My guess was to leave things slightly more organized with each interaction. The best way to do this was to clean up one unrelated thing every time I had to find or place something. Over a long enough time period this would bring any arbitrarily messy space to a clean, organized one.

Another way to look at this is that the difference between clean and messy can be extremely small. It’s the guy who throws away his trash and also one piece of litter vs. the guy who doesn’t. Over a long enough time period these differences compound to create highly visible effects. In a physical sense this is about entropy. If entropy is increasing slightly for each time unit, everything will eventually be disordered. If entropy is decreasing slightly for each time unit, everything will eventually be ordered. Let’s call this entropic velocity.

So the difference between positive entropic velocity and negative entropic velocity can create highly visible, divergent, emergent effects over the right time scales. This concept probably only applies to things which are capable of accumulating state over time. If your immune system cannot fully fight off a pathogen, even if only by a little, you’re eventually going to develop serious, potentially life-threatening health problems (e.g. gangrene, sepsis). If you are extroverted and seek out meeting new people, you can grow your network extremely wide over time. If gravity slightly exceeds the outward effect of rotation, matter can condense into a planetary object.

This phenomenon influences some important human outcomes. If you earn slightly more than you spend, you can accumulate wealth. If you are saving with compound interest, you can grow wealth exponentially. Over time scales of 30–50 years, the emergent effects can be profound (e.g. top 10% vs. bottom 10% of wealth). There are probably other human outcomes for which this is true, and you probably want to be on the right side of entropy for them.
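
To make the arithmetic concrete, here is a tiny back-of-the-envelope sketch (the principal, rates, and horizon are arbitrary, purely for illustration):

```python
# Illustrative numbers only: two savers who differ slightly in annual return.
principal = 10_000
years = 40

low = principal * (1 + 0.04) ** years    # 4% compounded annually
high = principal * (1 + 0.06) ** years   # 6% compounded annually

print(round(low), round(high), round(high / low, 2))
# roughly 48,010 vs 102,857: about 2.1x apart after 40 years
```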

Preparing for your interviews — Leetcode is not enough

Introduction

The most common component of the software engineering interview is the algorithmic-style interview. You are given a problem whose solution you are expected to write in code within 45 minutes to an hour. The goal is not just to write a correct solution, but to solve the problem with better time/space complexity than a naive solution. You are generally graded on the performance and correctness of your solution.

If you’re preparing for your interviews, the best single resource is probably Leetcode. Leetcode is a platform for practicing algorithm questions ranked by difficulty and tagged with the companies that have used them in real interviews. Like other platforms, Leetcode provides the questions and will check the correctness of your solutions against test cases and run-time limits. Unlike other platforms (e.g. HackerRank), Leetcode’s customers are people practicing for their software engineering interviews.

In my experience both interviewing and interviewing others, Leetcode is a good first approximation for practice because it implicitly tests for the end goal of writing a solution that passes a battery of test cases with strict runtime requirements. However, there are many components of the software engineering interview that can greatly affect your performance, and no amount of Leetcode practice will help you grow those skills.

What Leetcode is good at

As I mentioned, Leetcode or a similar platform is perhaps the single best tool you can use. It implicitly checks for a combination of skills that are necessary to reach the end goal of writing correct code under strict runtime requirements.

Coding

You can’t solve an algorithmic question without being able to code. In this sense, Leetcode will help you answer an important question: are you capable of coding at the most basic level of proficiency?

Algorithmic Analysis

You also can’t solve an algorithmic question without having a solution of the appropriate time complexity. The test cases often include extremely large inputs, so an O(n^2) solution will have a very clear runtime difference from an O(n log n) solution. If your solution is timing out on test cases, that suggests you are not solving the problem with the appropriate time complexity. If you can pass all test cases, that is a sign you have devised a solution with the appropriate time complexity.
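
As a rough illustration (not any specific Leetcode problem), here is a made-up pair-sum task solved at two time complexities; on very large inputs the quadratic version will time out while the linear one passes:

```python
# Illustrative only: the same problem at two time complexities.

def has_pair_with_sum_quadratic(nums, target):
    """O(n^2): checks every pair; times out on very large inputs."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_with_sum_linear(nums, target):
    """O(n): one pass with a hash set of values already seen."""
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```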

Problem Solving

Naturally algorithm questions aren’t about memorization. They involve higher-level problem solving skills where you combine your knowledge of data structures and algorithmic analysis to come up with a general algorithm which solves the question in the appropriate time complexity.

Data Structures

Without the correct data structures you may not be able to solve some problems at all. For example, there is no good replacement for priority queues / heaps. Since different data structures have different time complexities for their operations, knowledge of data structures and algorithmic analysis are complementary skills, both necessary for solving problems.
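
For example, here is a minimal, illustrative sketch of a classic heap use case: keeping the k largest elements of a list in O(n log k) with Python's heapq:

```python
import heapq

def k_largest(nums, k):
    """Return the k largest values using a min-heap of size k: O(n log k)."""
    heap = []
    for x in nums:
        heapq.heappush(heap, x)
        if len(heap) > k:
            heapq.heappop(heap)  # drop the smallest of the k+1 candidates
    return sorted(heap, reverse=True)

print(k_largest([5, 1, 9, 3, 7, 8], 3))  # [9, 8, 7]
```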

Edge Cases for Correct Inputs

Code correctness is paramount for any solution. You want your function to work properly for any appropriate input it may receive. If a well-formed input causes incorrect behavior, then your code is incorrect. Leetcode is pretty good at including test cases that cover edge cases. If you can write code that passes Leetcode’s test cases on the first attempt, you can probably handle edge cases in your interviews.
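
As an illustrative example of the edge cases a test suite will probe, take a made-up function for the longest run of identical characters; the empty string and a single-character string are the inputs that usually break a first attempt:

```python
def longest_run(s):
    """Length of the longest run of identical characters.

    Edge cases a test suite will typically cover: the empty string
    (answer 0) and a single-character string (answer 1).
    """
    if not s:
        return 0
    best = current = 1
    for prev, ch in zip(s, s[1:]):
        current = current + 1 if ch == prev else 1
        best = max(best, current)
    return best

assert longest_run("") == 0
assert longest_run("a") == 1
assert longest_run("aabbba") == 3
```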

What Leetcode is bad at

In my experience both interviewing and interviewing others, there are many aspects of the interview that are not covered by solving Leetcode-style problems, especially at selective companies. This is unfortunate because there is no other resource I am aware of that will help you practice these skills. These things are usually not explicit grading criteria but can nonetheless hinder your progress towards writing correct code within the time constraints.

What ties together all of the things Leetcode is good at is that they are easy and scalable to grade. It’s not a coincidence that Leetcode questions involve no ambiguity about correct inputs/outputs, that grading is easily quantifiable, and that everything is extremely structured. Leetcode is not testing the full range of what’s graded in an interview; rather it tests a balance of what is important and what is easily measured. Leetcode is a business after all, and it implements features that bring the highest ROI. I say this to underscore that there are clear reasons Leetcode (or any other platform) exists in exactly the way it does. As you might expect, the things Leetcode doesn’t help you practice are the ones that are hard for software to grade the way a human grades them:

Requirements Gathering

In a real interview setting the question is often deliberately underspecified. What is expected of you is to properly scope out the requirements of the problem — the inputs, outputs, what to return in special cases, and coming up with good examples to understand the question. In a professional setting these skills are extremely important because mistakes of this nature are so costly. Because this normally happens in a dialog between interviewee and interviewer, it’s not easily automated in code.

Code Quality

Code quality is hard to quantify in an algorithm but easy for humans to understand. Most code in production environments is written once and read many times. Code can often last years, or even decades until it is removed or deprecated. Writing easily understandable code is a valuable skill because there is a shared understanding that clean code is easier to maintain. There are normally many ways to write code and you want to do it in a way that minimizes complexity, state, and lines of code given the constraints of your programming language.

Testing Bad Inputs

Since writing correct code consistently in production environments is an ideal and not a reality, handling error cases is important. Occasionally you may be asked during interviews to deal with error cases, usually bad inputs such as values of the wrong type or values outside the expected range. You may also be asked to handle these errors with exceptions if your language provides exception handling.
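
A minimal, made-up sketch of what this can look like in Python, rejecting bad inputs with exceptions:

```python
def average(values):
    """Mean of a non-empty list of numbers, rejecting bad inputs explicitly."""
    if not isinstance(values, (list, tuple)):
        raise TypeError("values must be a list or tuple")
    if len(values) == 0:
        raise ValueError("values must not be empty")
    return sum(values) / len(values)

try:
    average([])
except ValueError as err:
    print(f"rejected bad input: {err}")
```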

Familiarity with the Language

You are expected to understand the language you are interviewing in very well. You need to know how to import the correct libraries, what libraries are available to you, what methods/properties are available on built-in classes, string processing, sorting, etc. This extended list is pretty important and I may elaborate on it in another blog post. But the short version is that you need to know your language well enough to never need to look anything up, because you won’t have that opportunity in an interview anyway. And if you don’t know your language well and fail to use built-in functions or libraries, your interviewer won’t be able to distinguish that from writing unclean code.
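
To make this concrete, here is a small illustrative sketch (Python) of the kind of built-in knowledge that saves you from hand-rolling loops:

```python
from collections import Counter

words = "the quick brown fox jumps over the lazy dog the end".split()

counts = Counter(words)                        # frequency table in one line
most_common = counts.most_common(1)            # [('the', 3)]
by_length = sorted(words, key=len)             # sort with a key function
unique_sorted = ", ".join(sorted(set(words)))  # dedupe, sort, and format

print(most_common, by_length, unique_sorted, sep="\n")
```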

Manual Testing

Manual testing is how you test your code when you don’t have access to a compiler. This is an important skill because the first line of defense against introducing bugs is understanding how your code works by looking at it. Due to the high cost of bugs in production environments (in terms of downtime, degraded product quality for customers, engineering time spent investigating issues, etc.), your ability to find bugs on your own is highly valued. The goal of manual testing is to execute the code like a debugger would, going through it line by line and keeping track of state. Many mistake manual testing for testing the algorithm design, when it is actually about testing the algorithm implementation. This is a part of the interview where people very commonly make mistakes, and the mistakes happen when they execute their mental model of what the code is supposed to do rather than what the code actually says.
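
Here is a small, made-up example of what that looks like: walk a concrete input through each line and record the state exactly as written, not as you intended it:

```python
# Tracing reverse_words("ab cd") line by line, the way you would at a
# whiteboard, rather than relying on a mental model of the code.

def reverse_words(s):
    words = s.split(" ")        # words = ["ab", "cd"]
    out = []                    # out = []
    for w in reversed(words):   # 1st pass: w = "cd"; 2nd pass: w = "ab"
        out.append(w)           # out = ["cd"], then ["cd", "ab"]
    return " ".join(out)        # returns "cd ab"

assert reverse_words("ab cd") == "cd ab"
```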

Whiteboarding

In an in-person interview you will likely be coding on a whiteboard and not a computer. There is no good reason for this except network effects — whiteboards were used for interviews when computers were still expensive (or company budgets were low), and now we’re all stuck with it like the QWERTY keyboard. This is regrettable for a number of reasons I outline elsewhere, but what is important to keep in mind is that you are practicing for your interviews on a computer but using a totally different medium for transcribing information in real settings. With whiteboards your transcription speed is slower, you can’t shift lines down, shift characters right, or copy-paste. You will find that whiteboards are far less efficient than keyboards and because of this you will need to practice whiteboarding in a way that minimizes the time spent making edits.

Writing Code from Scratch

When you have access to a compiler, as in many phone interviews, you will often be writing code from scratch. For languages like Java, where your function may need to be public static and hooked into a main method, this can be an important thing to practice. Make sure you can always write your solutions from scratch. Leetcode provides a function signature because it needs to grade you against an objective API, but in a real interview setting that function signature hooked into a main function will not be provided.
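
A minimal sketch of the habit, shown in Python for brevity (in Java the boilerplate you must reproduce is heavier): write the signature, the driver, and the invocation yourself with nothing pre-provided:

```python
# From-scratch scaffold: you write the function, the entry point,
# and the example invocation yourself, with nothing pre-provided.

def fizzbuzz(n):
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

if __name__ == "__main__":
    print(" ".join(fizzbuzz(15)))
```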

Conclusions

Leetcode is a great tool for interview preparation but it has fundamental limitations, since everything is graded by code, not humans. This means Leetcode can help you build skills that are easy to test in code but will have poor coverage of skills that are hard to evaluate with code. Leetcode tends to be good at improving problem solving, algorithmic analysis, data structures, handling edge cases, and basic coding proficiency. Leetcode tends to be poor at requirements gathering, code quality, error handling, language familiarity, manual testing, whiteboarding, and coding from scratch. If you are preparing for your interviews, keep these weaknesses in mind, because you will need to consciously work on them; they are not implicitly tested by solving Leetcode-style problems.

How doctors acquire knowledge

The way doctors accept new ideas is very different from other fields, like engineering. Nowhere was this more apparent than on Twitter from January through March 2020. With the worst pandemic in the last 50–100 years spreading, you’d think epidemiologists and physicians would be the first to sound the alarm. They weren’t, not even close. Worse yet, many public health experts derided those who were first because what they were saying contradicted officially sanctioned knowledge. This got me thinking: why would doctors be the last group to learn something within their own expertise?

Doctors face unusual risks practicing medicine. If they do something not professionally-sanctioned and it harms a patient, they can be sued for malpractice. If they fail to do something professionally-sanctioned that could have helped, they can be sued. They can also be sued for no reason at all, or due to rare, unpredictable events. A jury of your peers may reduce bias but they are not equipped to evaluate the quality of medical judgment. Juries are considered unpredictable and best avoided even at a high settlement price. Avoiding lawsuits is the game for doctors. These risks are typically not there in other fields, which allows other professions to embrace a culture of learning through failure and risk-taking.

Knowledge in medicine is propagated in a legal-risk-minimizing way. The topology is top-down hierarchical.

  1. Professional, official, and academic organizations (e.g. WHO, AMA, CDC, Harvard Medical School, etc.)
  2. Medical textbooks which are compilations of many research studies and meta-studies.
  3. Researchers studying the subject matter.
  4. Practicing physicians whose experience or training overlaps with the subject matter in question.
  5. Practicing physicians whose experience does not overlap with the subject matter.

This legal-risk-minimizing way of propagating knowledge doesn’t necessarily promote true information, but rather information that has the widest level of consensus among the most knowledgeable experts. If one doctor is wrong, that’s malpractice. If they’re all wrong, that’s just medicine. The way doctors acquire knowledge may be legally safe but predictably has negative secondary effects. Independent thinking is systematically discouraged which allows false information, even ideas which defy common sense, to persist much longer than necessary.

Coderpad > Whiteboarding

The standard software engineering interview involves 3 to 4 technical interviews lasting 45 minutes to 1 hour each. What always confused me is that these interviews are generally done with marker on a whiteboard. It’s 2019, we have options. Those options are solutions like Coderpad, basically a screen-shared REPL (read-evaluate-print loop).

Typing speed is 5-6x faster than writing speed

The average typing speed is 75 words per minute. The average writing speed is 13 words per minute. A slower transcription speed means less information can be conveyed per unit time.

Whiteboarding is space constrained

You can only write so much on a whiteboard. If you aren’t tactical about how you approach whiteboarding (e.g. small writing, good placement of functions), you can run out of space. If you are forced to erase prior work, but later realize it needs revision, you must re-write it. Combined with slow transcription speeds, this will cost precious time and attention.

Typing allows flexible edits

The largest pain point of whiteboarding is making edits. If you have to make an edit in the middle of a line, what you write may not fit. You may be forced to erase previous lines you wrote to make new space for something you forgot. Combined with slow transcription speed, you are doubly penalized if you make a mistake that involves erasing prior work.

Typing never has this problem. Text within a line is offset as you type new characters or newlines. You can copy-paste if need be. Your attention can be directed more fully on the problem rather than tactical placement of functions and expressions just-in-case.

Coderpad iterations are faster

Writing on a whiteboard means you must check for errors manually by walking through the code with simple examples. This is a slow, error-prone process that takes up a lot of time in an interview, where time is already scarce.

With Coderpad, you immediately know whether your solution works for your test cases, allowing you to iterate faster.

Conclusion: Coderpad allows for more information density than whiteboarding

There’s really no getting around the fact that you can do more in a Coderpad-style interview than with a whiteboard. You can transcribe information faster, edit information faster, and iterate faster. There are only two reasons I can think of where whiteboarding would have a serious advantage:

  • Whiteboarding will never have technical/IT issues. Whiteboards never fail, and spare markers are usually on hand.
  • Whiteboarding has network effects. This basically makes whiteboarding the QWERTY of interview media.

To me the advantages of whiteboarding are not compelling. Whiteboarding limits the candidate’s ability to translate their ideas into code relative to Coderpad-style interviews and unavoidably filters candidates by an impractical skill (tactical whiteboarding). Even a plain text editor would be strongly preferable to whiteboarding.

The magic of object-oriented programming

My original programming style was what you would call procedural programming. Programs written in a procedural style typically consist of local data structures and module-level functions which operate on that data. This is how you would typically write programs if you weren’t familiar with object-oriented programming.

In my first job my manager once asked me to re-write an entire project into an object-oriented style because he felt my procedural style was less readable and less maintainable. It was confusing at first but he undeniably had a point — throughout my entire career the more object-oriented the software design was, the more readable and maintainable it was.

The primacy of object-oriented programming as a fundamental skill for writing clean, maintainable code is so widely accepted that there is often a special interview format covering the ability to write code in an object-oriented style (e.g. “design blackjack”). The ability to write clean, maintainable code and the ability to design proper classes essentially go hand in hand.

The importance of object-oriented design is not really that intuitive.

It always struck me as odd that object-oriented design just so happened to be obviously superior to the procedural style. Object-oriented design is just one of several logically equivalent styles of writing code; the bits still come out the same in the end. But from my own experience it was as if some physical law of software gave object-oriented design magical properties related to maintainability and readability. This is interesting because object-oriented design was never made for this purpose — it was just a way to bundle data and the operations on that data in the same structure.

I now think that object-oriented design actually does have a few, accidental properties which make it especially suitable for human-readable and human-maintainable code.

Object-oriented programming forces you to program to interfaces, not implementations.

In a procedural style, you write your functions according to the structure of the data you are processing. In an object-oriented style, you can only interact with the public interface of the class which often hides the implementation details of its underlying data. This means that as the private implementation of code changes within a class, downstream dependencies need not change.

Coding to interfaces also lets you swap objects as function arguments, provided they derive from the same base class. In software, this is called polymorphism. Polymorphism is a powerful tool that allows generic programming within class hierarchies.
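
A minimal, illustrative sketch of both ideas with a made-up Shape hierarchy: callers depend only on the public interface, so implementations can be swapped freely:

```python
from abc import ABC, abstractmethod
import math

class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

class Circle(Shape):
    def __init__(self, r):
        self._r = r               # implementation detail, hidden from callers
    def area(self):
        return math.pi * self._r ** 2

class Rectangle(Shape):
    def __init__(self, w, h):
        self._w, self._h = w, h
    def area(self):
        return self._w * self._h

def total_area(shapes):
    # Polymorphism: works for any Shape, present or future.
    return sum(s.area() for s in shapes)

print(total_area([Circle(1), Rectangle(2, 3)]))
```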

Object-oriented design maps well to human abstractions

On its face, there is no inherent reason why people should prefer object-oriented design over a procedural style if the two are logically equivalent. It just so happens that humans are better at recognizing and understanding abstractions and worse at raw information processing. Most people can keep only about five pieces of information in short-term memory, but they find object recognition second nature. Object-oriented design takes better advantage of the kinds of processing the human brain is good at.

Object-oriented design forces modular design

There is a concept called the single-responsibility principle which means the scope/responsibility/function of some module (package of code) should be as narrow as possible. Ultimately this means that any complex system will need to be made up of many independent, interacting modules. It turns out that it is far easier to make and maintain complex systems with simple, independent components than with a single, large, complex module.

The nature of objects naturally lends itself to modular design, where each object functions independently of the others and is shielded from their implementations, since all interaction happens through their public interfaces.

Procedural programming offers none of the advantages of object-oriented programming

The reasons procedural programming is considered less maintainable mirror the reasons object-oriented programming is considered maintainable.

First, procedural programming forces you to program to implementations, not interfaces. Since functions are operating on the raw data, if there is a change to the data, you must change every function which operates on that data. This could be many functions and these functions could be in many places.

Second, the human brain doesn’t think well in terms of manipulating raw data and keeping a complex workflow in memory. For a logically equivalent program, a procedural style is harder for a person to reason about.

Third, there is no enforcement of modular design in procedural programming. There is no concept of encapsulation. Often all data and functions exist within the same scope or namespace. There is no concept of logically grouping related functions or data because this has no bearing on how to process data. At best, sufficiently modular design in procedural programming will resemble classes but with weaker encapsulation.
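
Here is a small, made-up sketch of the first point above:

```python
# Procedural style: each function assumes employees are (name, salary) tuples.
def total_payroll(employees):
    return sum(e[1] for e in employees)

def names(employees):
    return [e[0] for e in employees]
# If employees become dicts or grow a third field, every such function
# must be found and edited.

# Object-oriented style: the data layout is private to one class.
class Employee:
    def __init__(self, name, salary):
        self._name, self._salary = name, salary
    @property
    def name(self):
        return self._name
    @property
    def salary(self):
        return self._salary

def total_payroll_oo(employees):
    return sum(e.salary for e in employees)  # unaffected by internal changes
```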

As a guideline, use the object-oriented style as the default

Since employees are typically the largest expense in a software company, it’s good practice to optimize your code for human readability and maintenance. Time is money, and more time spent trying to understand code is a real cost.

I would invest in Triplebyte if I could

As a software engineer with a non-traditional background, you get a better appreciation for the shortcomings of traditional hiring practices. For example, it was significantly harder for me to break into the field of software engineering with any first job than it was to get offers from Facebook and Microsoft. It was so hard to break into software that it was easier to pass interviews than to get them. In my initial job search, it took nine months to get two interviews without referrals.

Why are things like this?

The interview process involves successive, increasingly targeted filters. The intent is to weed out candidates in a cost-effective fashion while minimizing false positives. The first filters discriminate on group attributes, not individual ones. Those filters are recruiters and applicant tracking systems. They are two sides of the same coin, as they perform the same task: evaluate a candidate based on their resume.

What are the effects of filtering based on resume?

Resume evaluation is about evaluating groups, not individuals. Resume screening involves looking for attributes that are associated with higher interview pass rates. These attributes may include:

  • a degree in computer science
  • having gone to a prestigious institution
  • having previous job experience in the same role
  • having worked at a prestigious company

Another way to look at resumes is as a signaling mechanism. Signals are just proxies for skill and are really statements about groups, not individuals. A good resume is just good signaling, and a bad one lacks that signaling. A non-traditional candidate, by definition, will lack good signaling. Another way of thinking about a non-traditional candidate is as someone from a low interview-pass-rate group.

Now it should be clear what the problem is. If the initial filter to screen software engineers is based on group characteristics (i.e. signaling), then non-traditional candidates will be disproportionately rejected prior to the evaluation of individual skills. For the industry in general, this is not really a big problem, as non-traditional candidates are relatively rare. For non-traditional candidates themselves, this is a serious, career-altering problem.

So where does Triplebyte come in?

Triplebyte acts as a third party to which companies can outsource all screening prior to the onsite. They differ from recruiting firms in that they actually interview candidates and pick out the best-performing ones. This process works because people who pass Triplebyte’s screening have a higher pass rate at onsites (~60%) than candidates from the companies’ own internal screening processes (~30%). What differentiates Triplebyte’s process is not its selectivity but that it is resume-blind.

Triplebyte’s secret sauce is that they evaluate candidates exclusively on individual characteristics, not group characteristics. Their initial filter is a standardized test of software engineering concepts. Standardized testing is arguably the most unbiased, objective way of grading skills that we know of. The final filter is a two-hour technical screen by a senior engineer covering four major categories: coding, knowledge of computer science concepts, debugging an existing codebase, and system design. This final screen is rigorous enough that I initially failed it while receiving offers from Facebook and Microsoft.

As someone with a non-traditional background, Triplebyte’s method is the proper solution to connecting unrecognized talent with software companies that normally screen based on a resume.

What I wish I knew as a physics major

Master a book on applied mathematical methods used in physics early

Physics classes teach the physics, not the mathematical methods required to solve the problems. Sometimes they will dedicate a small amount of time to mathematical methods, but the limited time available forces them to prioritize the physics.

Physics books have a similar problem: they teach the physics, not the math. These books are rarely self-contained. This is because physics more or less depends on all of applied mathematics and cannot really restrict itself to a specific domain. Any physics course can require arbitrary methods from probability, statistics, linear algebra, infinite series, multivariable calculus, vector calculus, complex analysis, Fourier transforms, integral transforms, differential equations, etc. The consequence is that the reader is implicitly expected to already be acquainted with the mathematical methods, and physics books typically don’t specify the prerequisites for working through them.

Physics departments largely leave the responsibilities of teaching math to the math departments, but the math departments are not equipped for this task. Rather, math departments will teach at a level appropriate for the average student taking each course. This is to say if most of the students in a linear algebra course are economics majors, your linear algebra course will be mostly unrigorous for physics students and won’t introduce them to the mathematical methods that they require for their physics courses. Also, math departments tend to lean towards proofs and not applied techniques, so your linear algebra class may be more oriented towards vector spaces and proving properties in linear algebra rather than optimizing for the applied approach that physics students need.

So what are you to do? Unfortunately, you can’t count on getting the right education from the institutions that are supposed to provide it. Instead, your best approach is to read a book designed to teach the mathematical methods required for the physical sciences. The first and most famous book with this intent for undergraduates was written by Mary Boas and has the apt name Mathematical Methods in the Physical Sciences.

Personally I’ve found that the coverage of Mathematical Methods in the Physical Sciences was better preparation for physics than all of the math courses I ever took combined. Boas’ book does not focus on proofs like math courses and is oriented specifically towards students in the physical sciences who need a rigorous foundation in applied mathematics.

It’s not enough to just read this book; you also have to do the practice problems. There is a world of difference between following a derivation explained to you and creating your own. Ideally such a book has a companion volume explaining the answers to the practice problems, not just the correct answers but all the steps. This greatly shortens the feedback loop when you don’t understand how to use a method, which is arguably where the most important learning occurs.

The earlier you master these methods, the more time you’ll save. Preferably you’d master this within your first year of studying physics, and hopefully even before that.

Master Mathematica

Mathematica is perhaps the best software package for symbolic computation (aka computer algebra). Symbolic computation allows you to use a computer to transform or reduce mathematical expressions. Symbolic expressions are more-or-less the language of physics. Everything is math, but that math is expressed in terms of variables such as x, y, cosine(z), e^{a+b}, etc.

Mathematical expressions in physics can be very large, often with more than ten terms apiece, complicated nested functions, lots of subscripts and exponents, and many variables. Reducing these expressions by hand can be very error-prone. Expressions are sometimes chosen so that they reduce to a very simple form, at which point you know you’ve finished. Other times they reduce to some form that cannot be transformed further. If you make one small mistake, you can turn a nicely-reducing expression into a much more complex one. This not only makes your answer incorrect, it acts as a timesink: you labor down the wrong path, working on expressions far more complicated than intended.

Having a computer check your work is invaluable. It ensures correctness, saves time, and makes it easier to understand your mistakes quickly. Short feedback loops allow you to learn faster.
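
As a rough illustration of what symbolic checking buys you, here is an analogous check using Python's sympy (a stand-in for illustration; Mathematica is what I actually recommend):

```python
import sympy as sp

x = sp.symbols("x")

# Suppose a derivation by hand ended at 1 + cos(2x); check it against the
# expression you started from, 2*cos(x)**2, without trusting either one.
hand_result = 1 + sp.cos(2 * x)
original = 2 * sp.cos(x) ** 2

difference = sp.simplify(original - hand_result)
print(difference)  # 0 -> the hand derivation is consistent
```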

Get at least a minor in mathematics

Physics majors should effectively be majoring in math, or at least minoring in it. This holds regardless of what is explicitly said, because there is not enough time for physics courses to teach the mathematical methods required. Most physics programs have a minimum requirement of math courses for the major, but practically speaking you should go above that requirement. For your own benefit, take rigorous courses in the following: single-variable calculus, multivariable calculus, linear algebra, complex analysis, differential equations, Fourier transforms, and perhaps probability. You want these courses to emphasize the applied aspect (using techniques to solve problems), not the pure aspect (proving major theorems).

If your math courses are not rigorous, that can be a huge problem. You will save yourself more time by taking a time-intensive, rigorous math course and solving your physics problems faster than by taking an easy, unrigorous math course and banging your head against the wall on all your future physics problem sets. If a course turns out not to be rigorous enough, spend time studying the mathematical methods of that field on your own while you still can. When the workload in your later physics courses gets harder, there will probably not be enough time to compensate for any unpreparedness.

Consider typing your homework

Homework is typically done by hand. Because showing your work is required and expressions can be very long, your homework sets will be extremely long. They may be 20 pages, largely comprising long expressions being duplicated with single transformations (to make the work easier to follow). If you type your homework, you can leverage copy-and-paste to save the time it takes to create your assignment and more easily recover from realizing that problem 8c requires a much longer derivation when you’ve only allocated 6 lines. This will save you from possibly developing a repetitive stress injury from constantly writing.

I recommend Mathematica for typing homework. Since it also provides symbolic computation, you can check your work in the same software you use to type your homework. This is a huge advantage over writing on paper where you can’t automatically check your work. Alternatively, you can use LaTeX, the premier typesetting program for physics and mathematics papers.

I’ve used both Mathematica and raw LaTeX to typeset homework, and I prefer Mathematica due to the ability to check my work. LaTeX looks more professional but has a higher learning curve and is more tedious to use, given that you have to constantly compile the document to see its formatting. One typo in LaTeX and things won’t render properly. If you manually trigger compilation, you can also risk a repetitive stress injury, since you will constantly be pressing the same pattern of keys. With Mathematica, what you see is what you get.

Advice for CS majors in college about uncertainty

I had a girl reach out to me on LinkedIn asking about an internship at Facebook. There wasn’t much I could do, given that there are formal channels for applying for internships and a referral from someone who does not know you carries little weight. I took a look at her resume and we had a conversation. The following is from the tail end of it. She expressed concern over the uncertainty surrounding the decisions she needed to make, and my thoughts here apply to a broader audience, though they are tailored specifically for CS majors in college.


Analysis of a cryptocurrency token whitepaper: scatterpass

Introduction

I have friends in the cryptocurrency space who ask for my opinion on the viability of certain projects from time to time. There’s a lot of money in initial coin offerings (aka token sales) and generally the first piece of information available from which to make the decision to invest is a single whitepaper. These whitepapers in the cryptocurrency space were inspired by the original Bitcoin whitepaper which preceded the actual implementation and explained the mechanics of the blockchain. These whitepapers are considered authoritative references on the bigger questions about the coin’s existence and technical architecture, and are usually focused on high level details.


What makes a good private cryptocurrency

Introduction

Among cryptocurrencies privacy is a major category of features for which there is a real need. When I say privacy, I’m referring to people who depend on their anonymity for their safety. The canonical examples include journalists, dissidents, political activists, and criminals. These groups are frequently targeted by governments, known to be the most powerful kind of adversary with incredible resources. For this reason, the gold standard of privacy is privacy from governments. Obviously, if you are private from governments, you are also private from everyone else, meaning that you don’t have to be targeted by a government to find utility in privacy.
