
· 2 min read

At Kafka Summit London 2023 some of us will be going on a run (or walk-- that'll be my speed!) and all are welcome!

Here's a picture of the participants from last year's KSL Fun Run:

Group of twenty or so people wearing running clothes and smiling.

Hopefully we'll see a crowd with both new and familiar faces this year!

The Details:

Who's invited: Anyone who would like to! We will have an experienced runner taking the role of lead runner, and I will be taking the 5k at a walking pace at the back. No matter your experience level, you'll have company!

Where will we start: We'll start and end at the Aloft London hotel. We've plotted a route here that can be broken up into sections. If you don't think you'll make it in an hour, you can turn back over the bridge. Otherwise, you can do the last 1.5km.

What time? We'll meet at 6:30am in the hotel foyer for pictures, and then take off at 7:00! If you're walking, it won't take more than an hour, so you'll be back at the hotel in time to get ready for more conference fun!

Ok, what day though? May 16th!

No Registration Required

This is not an official run so no sign-up is required! Just show up if you're feeling like exercising away some of that morning fog or jetlag :)

· 6 min read

Over my career I've had to learn new things many, many times. I've been an elementary school teacher, a digital marketer, a software engineer, and now I'm a full-time developer advocate. Over and over again, I've had to learn completely new concepts. It's great; it keeps me humble, it keeps me happy. There's a lot of dopamine involved in learning. But one of the most difficult parts of learning is knowing what you don't know. And knowing what you don't know is important, because it keeps you from making the types of mistakes that emanate from errors in judgment.

I've learned a couple of things about defining the limits of my knowledge about a concept. I'm sharing my framework here in case it's helpful to anyone else in tech.

Stage 1: Nothing

The first stage of getting to know a new concept is the easiest to define. I'm starting from nothing!

This is great. I know exactly what I don't know. Let's say I'm completely brand-spanking new to the concept of an API. Here is a diagram of what I don't know about APIs:

image of my brain, outside of a circle labeled 'everything about APIs'

It doesn't get more accurate than that. I'm not kidding. "What you don't know" will surprise you again, and again, and again. The framework I'm suggesting here will not save you from being wrong in front of people. But I think it will give you some general guidance on how to approach learning new things.

Stage 2: One Instance of a Concept

So let's consider universal concepts. Definitions that we all agree on. For example, an API according to Wikipedia is

"An application programming interface (API) is a way for two or more computer programs to communicate with each other."

Maybe that's not the most complete definition, but I think most techies would agree on what's there. Now, you can memorize this definition, but you won't have experiential knowledge of an API until you've met one out in the wild.

Let's say I'm at that stage. I'm getting used to querying a REST API, and I'm learning to build one myself with Python or Node Express or something. This definition applies to the REST API I'm building, so I'm gaining experience with one type of API. One particular instance of this universal concept. Here is another map of my brain:

circle with my brain in it, just touching two intersecting circles labeled 'everything about APIs' and 'everything about REST APIs'. An arrow points to the intersection of those two circles

So, I'm learning something about REST APIs. And that's it for now. The arrow is pointing to a boundary that I'm not aware of:

I don't know what's common to REST APIs and all APIs, and what's different.

So, I've seen one particular instance, but I've not seen another instance of an API. And this is a problem because there are different kinds of APIs underneath the universal concept of "API". Like there are different kinds, or species, of cats within the "cat" family. And tigers are very different from lions.

Stage 3: Two instances, of different kinds, of a concept.

Say I start implementing not a REST API, but a GraphQL API. It's a different kind of API from a REST API, like lions are a different kind of cat from tigers. My worldview on APIs begins to break down. It's destroyed by differences between REST APIs and GraphQL APIs, like

  • GraphQL is a query language and runtime, while REST is an architectural style. Not every API is in the REST style!
  • GraphQL requests are JSONesque, while REST requests are often parameterized in URLs. Not every API uses parameters the same way! Or has the same request format (there's a small sketch of this right after the list).
  • You define GraphQL with a schema, rather than a list of endpoints. Not every API is defined the same way!
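
To make that difference in request formats concrete, here's a minimal sketch of fetching the same data from a REST API and from a GraphQL API. The endpoints, the book resource, and its fields are all made up for illustration:

// A rough, hypothetical comparison of the same lookup in both styles.
async function fetchBook() {
  // REST: the resource and parameters are encoded in the URL.
  const restResponse = await fetch('https://example.com/api/books/42?fields=title,author')
  const restBook = await restResponse.json()

  // GraphQL: one endpoint, and the shape of the data you want lives in the query string.
  const gqlResponse = await fetch('https://example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: '{ book(id: 42) { title author } }' }),
  })
  const { data } = await gqlResponse.json()

  return { restBook, gqlBook: data.book }
}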

circle with my brain in it, just touching three intersecting circles labeled 'everything about APIs' and 'everything about REST APIs' and 'everything about GraphQL APIs'.

I know that diagram just got complicated, I'm sorry. But this is the most important stage to take notes at. Why? Because when you realize what assumptions you've made, you realize what questions to ask the next time you encounter a new kind of API.

These are the assumptions I've made:

  • All APIs are in the same style.
  • All API requests are formatted alike.
  • You define APIs in the same way.

Now that those assumptions have been broken by my experience, I know what questions to ask next time I'm learning a new kind of API.

Stage 4: The third instance of a different kind.

Ok, say I'm learning what a tRPC API is in order to implement one at work or something. Based on the kinds of assumptions I made last time, what might I ask now?

  • How does the style of tRPC compare to REST and GraphQL, the ones I'm familiar with?
  • How are tRPC requests formatted?
  • How do I map or define a tRPC API?

These questions will help me understand what a tRPC API is much faster than it took me to understand what REST and GraphQL APIs are.

Furthermore, I'll understand the general concept of an API even better, because I'll see what all of these kinds of APIs have in common. I'll also understand GraphQL and REST APIs better at this stage, because I can then make lists of their limitations.

Generalizing this process

I think every time you learn something new, you have to go through these 4 stages and there's not really a way around that.

But if you do it consciously, it speeds up the process of learning.

So when you're beginning, acknowledging "I know nothing about this. I should ask someone who knows something about this what is the best instance to build first," can save you some pain.

Then again, once you've built your first instance, acknowledging "I might be making assumptions here that don't apply to other instances. I should ask someone who has built other kinds of instances what the differences are," will help you make decisions about whether to learn a new paradigm when you're building your next instance.

· 4 min read

Recently I've joined the developer advocate team at Confluent, which is full of highly experienced speakers who have mentored me as I craft abstracts.

I'll be honest; when I began writing abstracts I thought, "How hard can this be? I've written plenty of blog posts, technical articles, and the occasional haiku. Abstracts will come naturally."

Reader, they did not! The art of writing and fine-tuning abstracts is challenging, but learn-able. Luckily, I've gained a lot of knowledge from my teammates and now I feel a lot more comfortable with writing abstracts. I wrote this post to hand on a few of the things I've learned.

1. Connect with your audience in the first sentence.

Let's start with this abstract that I've written up for the purpose of this blog post:

"Come to my talk about choosing React frameworks. We'll learn how and why to choose the framework that suits your web development needs. You'll learn criteria for choosing a web development framework and how to apply them. By the end of my talk, you'll know more about the React ecosystem and have the tools to get the job done."

The first sentence, "Come to my talk about choosing React frameworks," is a nice invitation but it doesn't really hook the reader. In order to connect with the audience, it's a good idea to start by mentioning their pain point. In this example, a good first few sentences might be more like the following:

"The number of React frameworks in recent years has reached an overwhelming height. Social media debates run fierce. There's only one consensus: choosing the right framework for the job is of paramount importance. But how, exactly, do we pick a framework?"

2. Position your pronouns thoughtfully.

Take a look at the abstract once more.

"The number of React frameworks in recent years has reached an overwhelming height. Social media debates run fierce. There's only one consensus: choosing the right framework for the job is of paramount importance. But how, exactly, do we pick a framework? We'll learn how and why to choose the framework that suits your web development needs. You'll learn criteria for choosing a web development framework and how to apply them. By the end of my talk, you'll know more about the React ecosystem and have the tools to get the job done."

There's "we", "you", and "my" here. Consistency is key in all writing, but for talks, you might choose "we" over other options to reflect a sense of comraderie with the audience.

3. Let your solution for the audience's pain point be clear.

Currently, the solution that the speaker offers is vague:

"We'll learn how and why to choose the framework that suits your web development needs. We'll learn criteria for choosing a web development framework and how to apply them. By the end of the talk, we'll know more about the React ecosystem and have the tools to get the job done."

In fact, all you can really tell is that the speaker is offering some kind of solution. There are no hints as to what it might be.

Here's a better way to express the speaker's intention:

"We'll distill the criteria for selecting a React framework into three crucial questions. Then, we'll walk through a few use cases together to garner some experience making these decisions. What does the decision making process look like for building static portfolio sites, large e-commerce sites, and mobile game apps? By the end, we'll feel ready to critically appraise React frameworks, familiar or unfamiliar, for our own projects."

This gives some detail ("three crucial questions", and the use cases) without giving everything away. It also communicates the value to the audience: confidence in their choice of framework.

Now the whole abstract reads:

"The number of React frameworks in recent years has reached an overwhelming height. Social media debates run fierce. There's only one consensus: choosing the right framework for the job is of paramount importance. But how, exactly, do we pick a framework?

We'll distill the criteria for selecting a React framework into three crucial questions. Then, we'll walk through a few real-life use cases together to garner some experience making these decisions. What does the decision making process look like for building static portfolio sites, large e-commerce sites, and mobile game apps?

By the end, we'll feel ready to critically appraise React frameworks, familiar or unfamiliar, for our own projects."

I think this version sounds a lot more interesting and clear, don't you?

· 6 min read

A couple weeks ago I started a brand new role: Developer Advocate at Confluent. So far I’ve been gladdened by stepping fully into an area of tech that satisfies my teaching heart. I’ve also felt supported by my new teammates. I’m new to the Kafka scene, with a background in JavaScript and GraphQL. In order to help onboard me smoothly, my teammates are meeting with me frequently, checking in on my progress (they even reviewed this article 😉), and are helping me select material from Confluent’s abundance of resources to learn Kafka.

My first step in my learning journey was this course on Kafka 101. I took notes throughout the course, which broke Kafka down to its core concepts. I'll share the synthesis of those notes below in hopes that if you're coming from a similar background, you'll find it useful!

What is Kafka?

According to its website, Kafka is a "distributed event streaming platform".

Well, what does that mean to me, a mostly-JavaScript developer with a background in web development and GraphQL? My usual approach to learning entirely new concepts is to take a back-to-square-one approach and understand some fundamental vocabulary first. Much like you might estimate the result of a math problem before tackling its execution (e.g., estimating that the sum of 219 + 38 will not be above 300), I like to situate myself with respect to the context of a new skill before executing it. The following paragraphs set the context of Kafka by defining and illustrating key concepts.

What is an Event?

So, a "distributed event streaming platform". The key piece of terminology here is 'event'. Before I took the course, I understood an event as a 'thing that happens', which still holds up within the context of Kafka. In Kafka, events are things that happen, and the representation of these things are recorded by machines.

Let’s take, for example, the event of a credit card transaction. The event is composed of a notification and the event’s state. To implement this event in Kafka, you’d need certain pieces of information, including a key, value, timestamp, and optional metadata headers. So you’d have something like:

Key: “Credit Card Payment”
Value: “Paid $12.50 for a ham sandwich”
Timestamp: “05/01/2022”

Producers serialize these pairs into a structured format like JSON, Avro, or Protobuf. By default, the maximum event size in Kafka is 1MB.

screenshot of page using a diamond shape to illustrate the above words
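
As a rough sketch of that structure, here's how you might model the credit card event above in a plain JavaScript object before it's serialized (the field names and headers are illustrative, not a specific client's API):

const paymentEvent = {
  key: 'Credit Card Payment',
  // The value gets serialized; JSON is used here purely for illustration.
  value: JSON.stringify({ description: 'Paid $12.50 for a ham sandwich' }),
  // Kafka record timestamps are milliseconds since the epoch.
  timestamp: new Date('2022-05-01').getTime(),
  // Optional metadata headers (made up for this example).
  headers: { source: 'point-of-sale' },
}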

What is a Topic?

screenshot of page using a rectangular shape to illustrate the above words

At a high level, topics are ways of organizing events. They are programmatic logs of events, as opposed to application logs. You can see this article by Jay Kreps to learn more about the difference. Logs have 3 noteworthy aspects:

  • They are append-only. New messages are only applied to the end of a log. They can't be inserted—unlike with a different structure (like a graph).
  • You can't change events in a log. This was important for me to wrap my head around since I was coming from a GraphQL background, and you can certainly mutate data objects using GraphQL.
  • Access to logs is sequential from a given offset, so they're read by picking an arbitrary offset and scanning.

What is a Cluster?

Ok, cool, but where are these events and logs kept so I can access them? Are they hosted somewhere? Kafka's storage layer is a cluster of things called brokers. Brokers are servers that handle things like write and read requests to partitions, as well as replication. Which brings us to:

What is a Partition?

screenshot of page using a rectangular shape to illustrate the above words

Logs are hosted and replicated across a cluster of machines, rather than clogging up one machine 🪠. Partitions make this possible. They take a single topic log and break it up into multiple logs.

Now, how are messages written to partitions? If the messages are keyless, they'll be assigned to partitions one after the other until the last partition is reached, at which point the process starts over from the first. Messages that have keys are assigned to partitions holding messages with the same keys. Kafka does this by hashing the key and using a mod function. If you want to take a deep dive into partitions, you can read this blog post by Ricardo Ferreira that examines them from the perspective of both developers and data ops.
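
Here's a small sketch of that assignment logic. It's simplified -- real clients use a specific hash function (murmur2 in the Java client) and newer versions batch keyless messages a little differently -- but the shape of the idea is the same:

// Simplified sketch of how a producer picks a partition for each message.
let roundRobinCounter = 0

function choosePartition(key, numPartitions) {
  if (key == null) {
    // Keyless messages: cycle through the partitions one after the other.
    return roundRobinCounter++ % numPartitions
  }
  // Keyed messages: hash the key and take the modulo, so the same key
  // always lands on the same partition.
  return toyHash(key) % numPartitions
}

// Toy hash for illustration only.
function toyHash(key) {
  let h = 0
  for (const char of String(key)) {
    h = (h * 31 + char.charCodeAt(0)) >>> 0
  }
  return h
}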

What is a Producer?

The producer sends the key-value pairs to the cluster, managing the interaction with brokers over a network. Incidentally, producers are also responsible for assigning messages to partitions. You can view a producer instance created with the Confluent client in Python on the Confluent Developer website.

# A minimal sketch of the imports this snippet assumes (Confluent's Python client).
from configparser import ConfigParser
from confluent_kafka import Producer

# Parse the configuration.
# See https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
# `args` comes from argparse in the full example and points at a client config file.
config_parser = ConfigParser()
config_parser.read_file(args.config_file)
config = dict(config_parser['default'])

# Create Producer instance
producer = Producer(config)

What is a Consumer?

A Kafka consumer reads the data from the stream. Many independent consumers can read from one topic. When a message is read it is not deleted.

screenshot of page using a rectangular shape to illustrate the above words

Consumer groups handle scaling automatically. You can see how to instantiate a consumer instance in Node.js in the Confluent Developer "Getting Started" tutorial.

// Create a consumer and resolve the promise once it's connected and ready.
function createConsumer(config, onData) {
  const consumer = new Kafka.KafkaConsumer(
    createConfigMap(config),
    { 'auto.offset.reset': 'earliest' }
  );

  return new Promise((resolve, reject) => {
    consumer
      .on('ready', () => resolve(consumer))
      .on('data', onData);

    consumer.connect();
  });
}
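
As a usage sketch, assuming the helper above and a made-up topic name (subscribe and consume follow the node-rdkafka-style API used in the snippet):

// Rough usage sketch: create the consumer, subscribe, and start consuming.
// 'purchases' is a placeholder topic name.
createConsumer(config, ({ key, value }) => {
  console.log(`Consumed event: key = ${key}, value = ${value}`);
}).then((consumer) => {
  consumer.subscribe(['purchases']);
  consumer.consume();
});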

Conclusion

Let's go back to the original definition of Kafka, a "distributed event streaming platform". We now know that an event is a representation of a thing that happens, expressed as a key-value pair. These events are organized into topics: logs that are partitioned, written to by producers, and read by consumers. The platform is a cluster of brokers that enables replication and fault tolerance. Together, this system is Kafka. The system arises from the viewpoint that data is not static -- that it's everywhere, flowing and interconnected. That doesn't mean you can't integrate streaming data with classic data sources like databases and REST APIs, or that you can't filter, transform, and manage it. But that's a topic for another day ;)

· 5 min read

The Big Picture

auth0 provides a solution to help you implement authorization (your users' level of access) and authentication (your users' identity) in the applications you build. You know how some websites navigate you to an authentication page that allows you to sign up and log in with your own email, or use Google? auth0 provides a smooth developer experience for implementing that type of functionality on your own application.

Note: if you're confused about authentication vs. authorization, auth0's docs feature a great analogy for remembering the difference. Authentication, or proof of identity, is like showing your badge to a security guard. Authorization, on the other hand, gives access to some resources but not others based on authorization level, like an elevator key sensor that gives you access to some floors but not others.

What kind of applications can you build with it? Well, auth0 provides solutions for several different types of apps.

Regular Web Apps

screenshot of page displaying a grid of regular web app options including Apache, ASP.NET, Django, and Express

Before the new wave of JavaScript apps that ran in the browser came along, most developers relied on apps that ran on the server. This is still the best way to implement many applications, especially those that require frequently refreshed data. auth0 has a generous amount of getting started resources for these types of apps, including tutorials for those apps that might combine server-side rendering with static site generation, like their Next.js tutorial.

Single Page Apps

screenshot of page displaying a grid of SPA web app options including Angular, React, JS, and Vue

auth0 also has support for SPAs (Single-Page Apps). SPAs use JavaScript in the browser to update content in a single page using APIs like fetch. This can improve performance, especially for websites like e-commerce pages and blogs, because the data doesn't have to be fetched from the server every single time a page is requested by a user. You can find tutorials for Angular, React, JS, and Vue apps on the auth0 website.

Native Apps

screenshot of page displaying a grid of native app options including Android, Cordova, Flutter, Device Auth Flow, and Angular

When an app runs natively, that means it is designed to run on a specific platform, rather than being platform-agnostic. For example, an iOS app is designed to run on an iOS operating system, and not an Android operating system. With auth0, you can get started and learn how to implement auth0 in native apps on multiple platforms, including iOS, Windows, and Flutter.

https://auth0.com/docs/quickstart/native

Backends and APIs

screenshot of page displaying a grid of backend and API options including ASP, Django, Laravel, and Go

You can implement auth0 to secure routes on many types of backends and APIs. This way, users who are not authenticated may not access private routes.

Which flow should I use?

auth0 offers different flows, or chains of steps in implementing security.

Client Credentials Flow

Now that you've been introduced to auth0 and the types of apps it integrates with, you might be wondering about different scenarios for using auth0.

For example, sometimes the person that needs authorization is not a person at all -- it's a machine! If you're not familiar with this type of scenario, think of CLIs (Command Line Interfaces) and similar services.

auth0 has designed a flow implementation for this called the Client Credentials Flow. The Client Credentials Flow is appropriate when the client is requesting access to resources that are under its own control, so when the flow is implemented, the client only uses its client credentials to request access.
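
To make that concrete, here's a rough sketch of the token request a machine-to-machine client makes -- the domain, client ID, client secret, and audience below are all placeholders:

// Sketch of a Client Credentials token request (placeholder values throughout).
async function getMachineToken() {
  const response = await fetch('https://YOUR_DOMAIN.auth0.com/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      grant_type: 'client_credentials',
      client_id: 'YOUR_CLIENT_ID',
      client_secret: 'YOUR_CLIENT_SECRET',
      audience: 'https://your-api-identifier/',
    }),
  })
  const { access_token } = await response.json()
  return access_token // sent as a Bearer token on requests to the protected API
}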

Authorization Code Flow

auth0's Authorization Code Flow is a ten-step flow appropriate for server-side apps with end users who can provide login details and consent. Inside this flow, the Authorization Code is exchanged for a token. It can't be used for browser-side code because the Client Secret is sent in the flow, and that would be visible in the browser. It's also super safe since the token is passed directly to the Client.
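
In outline, with placeholder values, the two key hops are redirecting the user to the authorization endpoint and then exchanging the returned code for tokens on the server:

// Step 1 (browser): send the user to the authorization endpoint.
const authorizeUrl =
  'https://YOUR_DOMAIN.auth0.com/authorize' +
  '?response_type=code' +
  '&client_id=YOUR_CLIENT_ID' +
  '&redirect_uri=https://your-app.example.com/callback' +
  '&scope=openid%20profile%20email' +
  '&state=RANDOM_OPAQUE_VALUE'

// Step 2 (server): exchange the returned authorization code for tokens.
// The Client Secret stays on the server, which is why this flow is server-side only.
async function exchangeCode(code) {
  const response = await fetch('https://YOUR_DOMAIN.auth0.com/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      grant_type: 'authorization_code',
      client_id: 'YOUR_CLIENT_ID',
      client_secret: 'YOUR_CLIENT_SECRET',
      code,
      redirect_uri: 'https://your-app.example.com/callback',
    }),
  })
  return response.json() // contains the access token, ID token, etc.
}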

Resource Owner Password Flow

This flow is not generally recommended, but it works in cases when the application can be trusted with user credentials, because in this flow the application collects and handles the credentials directly.

You can read more about it in auth0's resource.

Authorization Code Flow with Proof Key for Code Exchange (PKCE)

This flow is designed for SPAs and for mobile apps. Since these apps cannot store a Client Secret because it would then be exposed through either the browser or decompilation, auth0 recommends this alternative PKCE approach. The PKCE flow accepts a value from the calling app called a Code Challenge. It is a transformed value of a secret called the Code Verifier, and without that original value an attacker would not be able to exchange an intercepted Authorization Code for a valid token.
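
Here's a small Node-flavored sketch of how the Code Verifier and Code Challenge relate (this is the standard PKCE transform; the challenge goes with the initial authorize request and the verifier goes with the later token exchange):

// Sketch of generating a PKCE Code Verifier and Code Challenge with Node's crypto module.
const crypto = require('crypto')

// The Code Verifier: a high-entropy random secret the app keeps to itself.
const codeVerifier = crypto.randomBytes(32).toString('base64url')

// The Code Challenge: a SHA-256 hash of the verifier, base64url-encoded.
// An attacker who intercepts the Authorization Code can't redeem it
// without also knowing the original verifier.
const codeChallenge = crypto
  .createHash('sha256')
  .update(codeVerifier)
  .digest()
  .toString('base64url')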

Conclusion

I hope this post has given you a good big picture of how auth0 works! It's good to start with the type of application (Machine to machine? Mobile? SPA? Traditional?) and then identify the type of flow you need before you implement. If you have any more questions about implementation details, you can always head over to the auth0 docs. Of course, I'll be writing more blog posts on this topic in tutorial-style. 😉

· 4 min read

Dan Ott, Nick Taylor, Ben Holmes and I got together recently on a Twitter space to talk about JavaScript frameworks. It was a well-rounded conversation summarizing how to choose a framework, and the pros and cons of some of the more popular options. Below is a summary of what we shared.

My Summary of Our Conversation

What are some considerations to keep in mind when you’re choosing a JS framework?

It depends!

It’s important to keep in mind the end goal, apart from technology. If you have a client with a deadline, you’re going to need to use the solution at hand instead of spending lots of time exploring new solutions.

You have to provide reasons for the business value of your choices. If you’re working on a personal side project, however, you have more time to learn things for the sake of learning.

The other thing to keep in mind is what kind of website you’re building.

Content-heavy sites work best with a static site generator, whereas something like Spotify with a lot of client-side code might require a different type of framework.

It’s also important to keep in mind the goals of the project over time. Some people just pick the frameworks with the biggest applications to stay ‘safe’, while it might be smarter to start with the smallest functionality and build from there, to keep your website more performant.

This might look like, say, choosing Astro over a monolithic solution. It can also look like using an SSG like 11ty or Astro if where you're running JS is one of your considerations – they run on the client side. Or, you can start with Remix, which generates at runtime but is set up to do caching incredibly well.

Note: it’s not a community expectation that a frontend engineer knows the ins and outs of every single framework out there– for example, in an interview, it should be ‘good enough’ just to articulate the advantages and disadvantages of a few.

Note 2: The number of frameworks can become overwhelming if you’re new. Stick to learning one at a time, have patience, and eventually you’ll get a feel for where things have landed at your moment in time.

Why are we seeing JS frameworks encroach on more and more of the stack?

First of all, JS bundles are large and static and not easy to handle at scale.

Secondly, developers are beginning to request building routes partly statically, and partly dynamically, rather than building all routes statically. By moving JS to the server, you can stop shipping it and pass it to the client from the server instead.

For example, if you worked at a newspaper, perhaps you don’t want to statically deploy every single article from the last several decades every time you deploy routes. Caching the pages makes sense.

What is island architecture?

Island architecture allows the developer to ship less JS by volume. It’s as if static content were like oceans and content that requires shipping JS were islands.

For example, say you had a page with a banner and a carousel. You can load the banner statically, and 'turn on' animation and ship JS for that carousel component only.

In Astro, nothing is turned on by default. That way, you only pay the cost of shipping the components that really need to ship JS.

One thing to note: since the advent of new technologies besides webpack (which is still powerful in its own right), it’s become a lot easier to create these new frameworks, which contributes to the recent burgeoning of them.

On hype: it’s fun, but also, some of the above questions about the purpose of your project are more important than just using the latest framework.

Another trend: edge computing

In edge computing, JS is shipping closer (literally, on a closer server) to the user, so that it becomes faster and more performant. With Netlify, it’s cheaper monetarily to use edge computing than not!

Framework vs libraries vs metaframeworks

Classically, a framework is fairly opinionated, while a library is a less opinionated set of features. A metaframework would be a framework built in some way on top of another framework. With respect to Vue, Astro is a metaframework, while with respect to React, Astro is a framework.

Resources:

Netlify Page Props in Next.js Resource

Anthony Campolo on Partial Hydration

Jason Format Application Holotypes Resource

Jason Format Islands Architecture Resource

Patterns Islands Architecture Resource

Island boy song

Astro.js

· One min read

This is my blog! I've organized the content by tags:

seeds - These are notes and half-baked thoughts.

sprouts - These are blog and talk drafts, or more rambling trains of thought that still have structure.

trees - These are fully fledged blog posts, or even essays!

There are other tags too, of course, but those have more to do with topics and less to do with the content's position in my garden.

Hope you enjoy clicking around. 🌱

· One min read

As far as I can tell, there's a paucity of resources outside of academia for helping people understand compilers. It gets worse when you consider the resources available that are aimed at JavaScript/TypeScript developers specifically. That's why I'd like to write a course called "Compilers For JS Developers".

Here's a tentative outline:

Section 1: What is a Compiler?

Explain what compilers are on a high level. Include lexical analysis, syntax analysis, and code generation.
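
For a taste of what those stages produce, here's a simplified, ESTree-flavored sketch of the first two stages for the snippet const x = 1 + 2; (a real parser's output has more fields):

// Lexical analysis: the source text is split into tokens.
const tokens = ['const', 'x', '=', '1', '+', '2', ';']

// Syntax analysis: the tokens are organized into an abstract syntax tree (AST).
const ast = {
  type: 'VariableDeclaration',
  kind: 'const',
  declarations: [
    {
      type: 'VariableDeclarator',
      id: { type: 'Identifier', name: 'x' },
      init: {
        type: 'BinaryExpression',
        operator: '+',
        left: { type: 'Literal', value: 1 },
        right: { type: 'Literal', value: 2 },
      },
    },
  ],
}

// Code generation: the tree is turned back into (possibly different) source code.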

Section 2: Let's Play With the AST Explorer

Create exercises in JS, HTML, and CSS for developers to complete.

Section 3: Compilers In The Wild

Break down what happens when you run a TS compiling command -- JavaScript doesn't need to be compiled to be executable, since browsers have a JS parser. TypeScript, however, as a superset of JS, needs to be compiled to JS!

Resources:

· 4 min read

Writing A Clear Code Example

If you're a developer advocate, you've probably written a code example or two. The purpose of writing a code example is completely different from the purpose of writing production code. "It's messy, but it works," doesn't fly. It has to work and teach other developers. At the same time, you might not build the example app out to its fullest extent, for modularity's sake. In my time at a small startup, I've lost count of the number of code examples I've created. I've made some mistakes and learned a few things along the way. Here are some lessons that I can share with you.

Keep the Visual Impact In Mind

Keep it clean-looking. Running prettier before pushing is important, but that's not the only thing to consider here. Say I were writing a sample to show how to retrieve information from a certain API. How can I improve the readability of this codeblock?

fetch('https://api.sample.com/v3/endpoint', {
  method: 'POST',
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
    Authorization: 'KEY',
  },
  body: JSON.stringify({ body: 'data' }),
})
  .then((response) => response.json())
  .then((response) => console.log(response))
  .catch((err) => console.error(err))

Well, I could pull out headers and options, make them variables, and pass them into the fetch call.

const headers = {
  Accept: 'application/json',
  'Content-Type': 'application/json',
  Authorization: 'KEY',
}

const options = {
  method: 'POST',
  headers: headers,
  body: JSON.stringify({ body: 'data' }),
}

fetch('https://api.sample.com/v3/endpoint', options)
  .then((response) => response.json())
  .then((response) => console.log(response))
  .catch((err) => console.error(err))

As you can see, that makes it a lot easier to see what options you need to pass to the API endpoint to receive a response.

Comment Wisely

Let's say I was working on the same block of code. Even though this is a code sample, it shouldn't be necessary to comment on a lot of lines.

// these are the headers to be sent in the request
const headers = {
  // default is application/json
  Accept: 'application/json',
  // content-type is application/json
  'Content-Type': 'application/json',
  // send authorization from your account here
  Authorization: 'KEY',
}

The comments are starting to make this hard to read and I haven't even gotten to the options variable yet.

If you know your audience well enough, you might know that they're familiar with sending headers to an API. I'm a fan of short documentation links in templates, just in case:

// https://www.linktoapidocumentation.com
const headers = {
  Accept: 'application/json',
  'Content-Type': 'application/json',
  Authorization: 'KEY',
}

Otherwise, I try to keep comments out of the template, unless I'm doing something that interrupts a well-known coding paradigm.

Know The Limits Of Your Sample

It's important to keep your sample clear by illustrating one concept at a time (unless, say, it's a sample for a livestream and you want to show multiple aspects of your product, then-- time to go all out!).

This modularity is something that I have seen clearly illustrated in Next.js sample code. In Next.js's form example, you can see on line 34 that the information on a form is displayed in an alert.

Now, an alert box is often considered bad form (haha) in production. However, the authors of this code sample have wisely used it as a way to keep the modularity of their sample repository intact. Sure, they could have gone on and shown how to use useRouter() and query params to pass this information on to a new page, but they decided to focus on showing how to use forms in Next.js, so they ended the functionality on this page with an alert box.

In Conclusion

What have we learned? Well, the style of your code sample is determined by your scope and audience. Whether or not you need to comment to explain your code depends on your audience's understanding of the general concepts you're illustrating an instance of. Also, how much functionality you highlight depends on whether you're writing an example for documentation or a codebase to walk through a livestream. No matter what though, keep your code neatly formatted and organized. The need for clarity never changes. 😉

· 3 min read

Using Visual Thinking to Teach

How would you teach someone to draw a house? What would you say? I might write something like this.

"Draw a horizontal line. Draw two lines of the same length, starting from the ends of the original line, and perpendicular to the original line. Connect the two lines with another straight line. Now you have a square. From the top corner of your square, draw a line at roughly 45 degrees to the top line of the square..."

Whew. That's already a mouthful, and we haven't even finished the roof yet.

Ed Emberley was an artist who taught children how to draw. If he were to teach a child to draw a house, he would draw something like this (this is my own drawing):

Three slides. The first shows a square. The second shows a triangle on top of the square. The third shows a rectangle added to represent a door. There are mini versions of each shape under each step.

Without a single word, you understand how to draw a house in a moment or two.

This is the power of visual thinking. It's important in all fields of teaching. In developer advocacy, it can be used to teach audiences with graphs and maps and videos (of course, audio must also be supplied for accessibility). Visual thinking can also help developer advocates create the constraints they need to be creative. I think this is the less obvious point, so let's talk about it.

Using Visual Thinking to Plan Projects

Let's say I were a developer advocate who was planning a tutorial to introduce developers to a new SDK (Software Development Kit). Say that this SDK provides the developer with a type of hyperaccurate timestamp. To show the purpose of the SDK, I use it in the framework that I used to build this website, docusaurus.io. I'm also really excited about the docusaurus CLI (Command Line Interface), so I'll describe how to use that to view the pages in development. I'll draw a map of what I'm doing.

Three stacked blocks representing steps, only one with SDK mentioned

Hm. That's a lot of real estate devoted to docusaurus. Only one step is devoted to the SDK I'm writing a tutorial about. Maybe this plan could work for a livestream format, but for a tutorial, I'd better narrow my focus:

Two stacked blocks representing steps, one with SDK mentioned

There. I've planned my tutorial and given it a good focus, using visual thinking!

Where To Go From Here

I hope this has given you some good ideas about how to use visual thinking in your own approach to developer advocacy. It's helped me, in that it's given me some 'shortcuts' to project planning and also helped me communicate concepts more clearly to my audiences.

If you're interested in learning more about visual thinking, I highly recommend these resources:

Edward Tufte's Work

The Doodle Revolution, by Sunni Brown

Unflattening, by Nick Sousanis