Code16 was quite an experience - speaking at the conference twice, in two cities! Plus we got to experience the local meetup flavour, with SydCSS+SydJS and MelbJS bookending a huge week.

Sydney's event was at the Maritime Museum.

(Instagram photo by Ben Buchanan, @200ok: “Hi there Web Directions Code 2016! #code16 #sydney #darlingharbour”)

The event had a chilled vibe, with the healthy morning tea proving a not-entirely-surprising culinary hit.

(Instagram photo by Daniel Smith, @growthhackerau: #code16)

Melbourne was back at the Arts Centre, just over the bridge from Flinders Street Station.

(Instagram photo by Ben Buchanan, @200ok)

Contents

Yes a jump nav is old school, but this is a BIG stonking post...

Disclaimer & Image Credits

First and foremost: since I was speaking, these notes are extra rough.

These notes were hammered out very quickly because doing so seems to help me remember them. However due to the haste, errors occur and you should always assume I'm paraphrasing - if you need an exact quote, please check the session recording later.

Photo credits:

  • Kiki/Buba - By Monochrome version 1 June 2007 by Bendž Vectorized with Inkscape --Qef (talk) 21:21, 23 June 2008 (UTC) - Drawn by Andrew Dunn, 1 October 2004. Originally uploaded to En Wiki - 07:23, 1 October 2004 . . :en:User:Solipsist . . 500×255 (5,545 bytes), CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=19653163
  • Others variously my own; as noted in embeds; or "from somewhere on the interwebs".

Tim Kadlec – Once more with feeling

The human brain is an amazing thing. It processes all kinds of input all at once; and handles crazy pattern recognition tricks like seeing a face in a couple of lines and dots.

Our perception is surprisingly easy to trick.

When a photo of Barack Obama's face is shown upside down we sort of realise something's wrong… but when it's flipped the right way up, we immediately see that the eyes and mouth had been inverted.

Then there's kiki and buba... which do you think is which?

We think the sharp one is the “kiki” because it's sort of k-shaped and the rounded one is “buba”.

Our brains infer meaning where there is none; and our brains are fairly easy to trick – particularly with regards to time.

“Who thinks 300ms is a long time?” (not many hands)

An experiment was done with Oculus Rift, where people spent a day dealing with 300ms-delayed visual information… and they could barely function.

Meanwhile Walmart and others have done studies that found 100ms has a serious impact on the performance of web businesses. 100ms makes a surprisingly big difference.

What happens looking up a website?

DNS lookup → TCP connection → SSL negotiation → Request

Chrome has a tool that visualises DNS lookups if you're curious to see it.

There are 9 requests/connections before a secure site starts serving actual assets. It's easy to hit 400-500ms just to get the HTML down to the browser.

“The fastest request is no request.” - every performance advocate ever

Everything should have value, because everything has a cost. How do we balance richness with speed?

Performance used to be entirely considered “the developers' problem”. But that perception is extremely flawed.

Classic performance story: there was a slow lift in an office building. The lifts had to service a lot of floors and it could take forever. The engineers came in and determined there was nothing they could safely do to speed up the lift. So the building manager called a meeting… and a psych major type of person said “why are people complaining anyway? Why do they hate the lift so much?”

Strangers shoved into a boring metal box with nothing to do… so they put mirrors into the lift. People can check their hair; they could have a sneaky people watch at the other people in the lift. Complaints about the lift being slow dropped off.

The point is… actual performance wasn't changed, perceived performance was improved.

If you are waiting for something, there is an unavoidable period of time:

(start)------------------(waiting)------------------(end)

There are two kinds of waiting: active and passive. Passive waiting is the problem – where you are waiting with nothing to do while you wait. People waiting passively over-estimate the wait time by 36%.

New specifications let you preload resources so the user gets reduced load times on subsequent actions.

  • link rel="dns-prefetch" – just gets a head start on DNS lookup
  • link rel="preconnect" - this means when you hit the domain, the request fires immediately without DNS/TCP/SSL delay
  • link rel="preload" - feed the browser a specific resource to request ahead of time
  • link rel="prerender" – preload and prerender the whole page as though the user has already loaded it. This probably makes sense for Google results but could easily be problematic for less linear experiences.

These are all hints the browser could choose to ignore. But used carefully they can really improve the experience for users. Used poorly they can slam the user's connection and use lots of CPU.
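For example, a page might hint at a third-party origin and a key asset like this (the URLs are placeholders, not from the talk):

<link rel="dns-prefetch" href="//thirdparty.example.com">
<link rel="preconnect" href="https://api.example.com">
<link rel="preload" href="/css/site.css" as="style">
<link rel="prerender" href="https://www.example.com/likely-next-page/">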

Looking at the way browsers render content, you quickly realise critical resources block further rendering – eg. CSS and JS in the <head> pause HTML parsing because those resources will create a different DOM/CSSOM.

To combat this, JS can be loaded with async – the JS is parsed in parallel and executed later. There's also defer which pushes execution right to the end… but defer is not really ready for production.
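As a rough illustration (file names invented):

<script src="/js/analytics.js" async></script>  <!-- fetched in parallel, runs as soon as it arrives -->
<script src="/js/app.js" defer></script>        <!-- fetched in parallel, runs after parsing finishes -->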

The CSS is a little harder to handle. You have to split up CSS that handles the above-the-fold content; inline the “critical” CSS; then you inject the rest of the CSS with JavaScript later on; and set a cookie so on subsequent views you can just immediately link to the stylesheet which should be cached. This is really hard to do by hand – you really want to automate this.
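A minimal sketch of that pattern, with made-up file and cookie names:

<style>
  /* inlined "critical" rules for above-the-fold content */
  .masthead { background: #fff; }
</style>
<script>
  // once the page has loaded, inject the full stylesheet and set a cookie
  // so subsequent views can link straight to the (now cached) CSS
  window.addEventListener('load', function () {
    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = '/css/site.css';
    document.head.appendChild(link);
    document.cookie = 'cssLoaded=1; path=/';
  });
</script>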

Theoretically we now don't have any blocking resources.

How beneficial is the inline CSS hack anyway? It feels so dirty, it's hard to do…

Wired did this as an experiment – the optimised page loaded in 3.9 seconds instead of 12.5 seconds. The user had readable/viewable content after 3.9 seconds.

“Progress bars are the hold music of the internet.” They basically draw attention to the waiting.

An alternative to spinners or progress bars is to use 'skeleton screens', eg. grey boxes hinting at the layout that's loading. Facebook does this on the activity stream. This gets the user to focus on what they are going to get, instead of focusing on ...nothing.

Facebook skeleton screen

Another approach is caching the application shell on a mobile, so the wrapper loads immediately and all the content comes in later.

Travelocity uses a descriptive “progress bar” which talks about the tasks it's doing behind the scenes (“finding flights… finding shortest routes...”). It's not literally doing them at the time, but research has shown people actually engage differently with the results and don't mind the wait so much.

You need to be careful though… faster is not always better. H&R Block found users didn't trust a calculation that was running too fast. The manual equivalent was really hard and slow to do; so users expected even the computer would be a little slow. They put an artificial delay into the app and the users were happy – they felt the computer was doing “enough work”. Similarly Wells Fargo had to slow down a retinal scan that seemed too fast to be trustworthy.

The name for this is Operational Transparency – showing that work is being done. Reassuring people that what's being done...is being done.

Performance is more like choreography than a simple race to the finish. Performance is about the way the user feels. Perception frames the reality. Shift the focus to the user, not to the technology.

@tkadlec

Once More With Feeling - slides on Speakerdeck

Rachel Andrew – CSS Grid Layout

Note: see link at the end for code examples.

Rachel has been working on the web since the argument was whether we should use CSS at all.

Modern CSS is amazing in many respects, but CSS for layout seems to have got stuck around 2006. Floats, inline-block, display:table… The things we use for layout were not designed for complex web applications.

Our great hopes: Flexbox (w3c:css-flexbox), CSS Grid (w3c:css-grid), Box Alignment (w3c:css-align). Rachel will focus on the Grid.

First, define your grid. Then you define tracks – the rows and columns. Then you set up the gap between these tracks.

Example: three fraction-unit tracks to create three equal rows and columns (grid of nine).

You can mix fractional and absolute units, so you can have a 500px column followed by fractional tracks that divide the remaining space. There is a repeat() shorthand:

repeat(3, 1fr) /* three equal 1fr tracks */
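Putting those pieces together, a nine-cell grid might look something like this (selector names are mine, not Rachel's):

.wrapper {
  display: grid;
  grid-template-columns: repeat(3, 1fr);  /* three equal column tracks */
  grid-template-rows: repeat(3, 1fr);     /* three equal row tracks */
  grid-gap: 20px;                         /* gap between the tracks */
}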

You can use auto-fill, auto-fit and minmax() to create extremely flexible grids that always fill the space but adapt when things will no longer fit.
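For example (sizes are purely illustrative):

.wrapper {
  display: grid;
  /* as many columns as will fit, each at least 200px, sharing leftover space */
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  grid-gap: 20px;
}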

Where grid tracks are the spaces, grid lines are the numbered lines between and around the tracks – the things you position items against.

Grid cells are the smallest unit in the grid – a single cell or element in the grid. Grid areas are groups of cells.

So now you've set up your grid, you need to place content into the grid.

This is usually done by using grid-column and grid-row shorthand; or grid-area which runs through grid-row-start, grid-column-start, grid-row-end, grid-column-end – giving you a one-line syntax in your CSS rules.
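A quick sketch of both forms (the numbers refer to grid lines):

.item-a {
  grid-column: 1 / 3;  /* start at column line 1, end at column line 3 */
  grid-row: 2 / 4;
}
.item-b {
  grid-area: 1 / 1 / 2 / 4;  /* row-start / column-start / row-end / column-end */
}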

You can name the lines in the grid and then position against the names of the line.

grid-template-areas is “the ascii art way” to define the layout.

Those of us who used to deal with Netscape 4 are probably thinking… grid is very table-like. In many ways it is; but unlike tables the content doesn't need to follow source order, and items can be moved at different breakpoints to create responsive designs.

When you move into complex grids, naming lines becomes important. You can position against the start of content and not the start of a specific display location.

Named areas go even further – you can name areas like “header”, “content”, “sidebar” and “footer”. The ascii art style lets you do things like:

"header  header"
"sidebar content"
"footer  footer"

...to get a header and footer that span two columns; and a sidebar and content area that sit next to each other. Really easy to understand and great for rapid prototyping. You can use extra whitespace to line things up neatly in your code; and you can use . to specify an empty area.

"header  header"
"sidebar content"
".       footer"

Can you replicate the classic 12 or 16 column layout, popularised by libraries like Bootstrap? There is a big cost to using those systems as it pushes weight and complexity into your markup. CSS Grid lets you describe complex layouts in the CSS and redefine at different breakpoints. It's a lighter way to do it.

Example: replicating the Skeleton grid with CSS Grid instead of a class scheme. While there is still complexity – you still need to do the work of specifying your complex grid – it doesn't require anywhere near as much markup. Naturally you can replicate the older grid systems; but you can do more. You can span rows as well as columns.

You can do fast prototypes and try different things by changing CSS, rather than having to change markup across n instances in the markup.

The grid item placement algorithm determines the rules of content placement. It tries to ensure no matter what you do, nothing will overlap. Some of the layouts can be a bit odd if you've made a mistake or only defined a partial grid – but the content should not overlay or run together.

Grid auto flow set to “dense” creates really smart layouts, assuming the order can be manipulated out of DOM order. Powerful but a bit scary…!

FAQ: is Grid a competing specification to Flexbox? Is there some kind of W3C Working Group death match coming up? Not particularly – while they share many similarities, the shared pieces live in separate CSS modules. The css-align module is used in both Grid and Flexbox. This leaves the other modules to solve the different use cases they are best suited to.

When should you use Grid over Flexbox? Flexbox is for 1 dimensional layout (a column OR a row); Grid is for 2 dimensional layouts (columns AND rows). Grid is much better at aligning content against the position of other content. Flexbox works from the content out (sizing and positioning driven by content size); Grid defines a grid and lets content sit inside (sizing and positioning driven by grid design).

Power and responsibility: accessibility must be carefully considered – do not think Grid is an excuse to forget about logical source order!

Nested grids? Yes! A grid item can be a grid for child elements. There is a subgrid value although it is at risk of being removed from the spec. Eric Meyer has a great piece advocating for subgrids.

This is an emerging specification. Now is the time to experiment with it, try it out and give feedback. It's now that you can influence the version of the spec you'll be using in production in the future. If you wait until it ships it is too late to complain! Spec issues are now handled on github.

Browser support? Currently it's behind feature flags and in early dev versions; but not in any mainstream browser.

All the code: rachelandrew.co.uk/presentations/css-grid

gridbyexample.com / rachelandrew.co.uk / grabaperch.com

@rachelandrew

Stephanie Rewis – Pragmatic Flexbox

When Stephanie set out to build the Salesforce UI library, the team had plenty of discussions about the layout system they'd use. The problem with layout is we've been using all those methods that were never really made for it.

Don't forget when people tried to use Dreamweaver to magically create layouts… they used position:absolute, and the moment you wanted a footer you were screwed. We had some hope with display:inline-block but it had its own issues. Then we had display:table which was the best thing we could do with the browsers we had.

Now we have display:flex! What's not to love? But the immediate question is can I use flexbox? In production? Now? In all browsers I really have to support?

You can – if you are supporting IE10+ you are in good shape. Use a tool like autoprefixer to avoid the need to manually prefix things. There are some great fallbacks; look up “almost complete guide to flexbox without flexbox”. There are polyfills, although do consider just not supporting old versions of IE! Flexbox can be a progressive enhancement. Or you can wire up an older layout system behind modernizr.

The key pieces to flexbox: flexbox containers and flexbox items.

Container properties: display: flex / inline-flex; flex-direction (sets the main axis to column or row); flex-wrap; flex-flow; align-content.

You need to be aware of the flex axes – main axis; and cross axis. Both axes have a start and end which are important for alignment. These are not horizontal and vertical – main axis is set by flex-direction and cross axis is the opposite/crossing axis. Using flex-start and flex-end is great for internationalisation if you support both LTR and RTL languages.

Flexbox items: shorthand “flex” combines flex-grow, flex-shrink and flex-basis. You can reorder elements but remember to be judicious and keep accessibility in mind. Basically you shouldn't mess with the order, Firefox in particular is problematic as it follows CSS order (which is a bug but that's the reality).

What can we do with flexbox?

  • Display flex: equal height boxes
    • The default layout is great for headers where the heading in the middle needs to fill between the logo on the left and the controls on the right. But it's not so great for body content.
    • To control the widths you can set a min width… then you can use justify-content:space-around or space-between to ensure there isn't an awkward lopsided gap at the end, it's evenly spaced out.
  • You can align things to the top or bottom
  • You can choose if things stretch to fill vertically
  • You can centre-align content. Vertically centred. VERTICALLY CENTRED!
  • Classic OOCSS items like the media object are really easy and the DOM order ends up being cleaner and better for accessibility.
  • Horizontal lists are incredibly easy
    • pagination can be set up to always space correctly no matter how many items there are
    • you can use margin auto to bump the first and last item out and away from the rest, which will still evenly space
    • horizontal navigation can be set up with nice even items
  • You can make a box of content always centred – vertically and horizontally – no matter how much content they have, just with display:flex on the parent and margin:auto on the child.
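That last one is a two-line sketch (class names invented):

.parent {
  display: flex;
  height: 100vh;  /* give the container something to centre within */
}
.child {
  margin: auto;   /* auto margins absorb the free space on every side */
}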

Grids… a flexbox grid starts with:

.grid { display: flex; }
.item { flex: 1 1 auto; }

Then they add modifiers to reverse rows, create columns and so on. You can control whether items will wrap.

There were many debates around breakpoints and how to handle responsive design. The team makes great use of Cap Watkins' sliding scale – “how many figs do you give” about the topic at hand – to resolve arguments. Good one to look up!

(links in the slides!)

@stefsull

Greg Rewis – Does your web app speak schadenfreude?

Schadenfreude – taking pleasure from someone else's misfortune.

What does schadenfreude have to do with a web site? Internationalisation and localisation! When we get this wrong, we inflict (accidental) schadenfreude on people who use our sites and applications.

FastCompany/Common Sense Advisory: to have a truly global site, it must “speak” more than 16 languages.

Think globally, code locally. It is our responsibility to understand how the design and code decisions can affect our ability to deliver sites to a wide audience.

Greg works for Salesforce – they translate into 34 languages, and 15% of their users are non-native English speakers. So it's a big deal.

Internationalisation (i18n) VS localisation (l10n) – what's the difference?

i18n is designing for easy localisation. Localisation is the actual process of adapting a product to meet the language and cultural requirements of a local market. Including...

  • Dates
  • Numbers
  • Currency
  • Phone numbers
  • Symbols/icons
  • and more!

It's not just translation strings.

Phone numbers are written 555-1234 in the United States, so people from the US will write the dashes when they fill out the form… and they won't notice they've done it because it's second nature. Greg had trouble submitting a phone number because it couldn't handle dashes between the numbers. Greg's epic rant ensues: stop blanking out form fields when I make a mistake!!!

Words taken out of context are very hard to translate. “Device” in context might mean the phone you are using; but the French translation called it “périphérique” – which means a peripheral device like a printer, not a primary device like the phone. When the translator got the word “Device” in their translation list they didn't know the context, so they had to make an educated guess… but they went with the wrong option.

Even in context, words may not mean what we expect! (classic joke about “what british people say vs. what british people mean...”)

Even when translated correctly the words may be far bigger. “Edit” is “bearbeiten” in German. If you do all your design in English, it is probably going to break in French and German! There's a rule of thumb to allow for 20% text expansion… but that rule is wrong. In languages like Finnish, German and Dutch, single large compound 'words' replace strings of small words in English. There's nowhere for that word to wrap or break nicely.

Example: the old “views” button on Flickr was three times longer in French and it was both longer and two words with a break in German.

You may run out of room entirely – a set of buttons may simply not fit where you need it to fit. You have to come up with a whole new approach.

The W3C has published real expansion ratios for l10n – strings of up to 10 characters may need 200-300% expansion, coming down to around 130% once you get past 70 characters.

It doesn't end there…

Symbols can be an issue. A button with a plus (+) icon grew in Korean due to the character set being different.

Thai script has much taller ascenders than English. The line height doesn't work.

This is why bad i18n/l10n leads to schadenfreude – things become hard to read, hard to use for someone else.

Oh and 16px? Don't use that. Our Roman alphabet fits in 16px; other languages (particularly Asian languages with denser glyphs) need larger text and taller line heights. Don't hard-code heights on anything – translations will quickly lead to overlaid text.

Tabs. Tabs are a challenge. Fixed width tab navigation tends to truncate the meaningful information. Same with breadcrumb navigation. There are words you really don't want to truncate!

Salesforce have a table of elements that can wrap and/or truncate. eg. buttons cannot wrap or truncate, form labels can wrap but not truncate, textboxes can truncate.

Don't force capitalisation – not all languages capitalise things the same way. In Danish, nothing but the first letter is capitalised. Ever.

Semantics go wrong as well – the way you emphasise text changes between languages. You can't always emphasise a specific word, it may need to be a set of glyphs; a region of text.

Use the lang attribute. Put it on HTML, use it inline when required.

Then there's also translate="no" to ensure automatic translations don't override it – for example Greg used “schadenfreude” on purpose in his presentation, and automatic translation would otherwise have changed it. But be careful not to accidentally block content that should be translated – if you put a lang attribute on an A element with a title attribute, the title gets translated. Which is fine if you intended that, but not good if you didn't.
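For example (markup invented to show the attributes, not Greg's):

<html lang="en">
<body>
  <p>The German word <span lang="de" translate="no">Schadenfreude</span> should stay in German.</p>
</body>
</html>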

You may also be caught out by cultural differences in structure and order – setting “from” and “to” dates doesn't work as well as “begin” and “end” which are more universally translatable.

Encoding. Use the right encoding. Don't skip diacritics and other modified characters – anos/años mean very very different things in Spanish.

Salesforce are now using a fake-language plugin for their tools (including Sketch) to help identify untranslated strings, unencoded symbols and other issues, from the design phase onwards.

The culture you grow up in plays a huge part in the way that you perceive things; and how you then build things. Culture is like an iceberg – a small tip of things we notice and think about, then a whole lot of stuff under the water that we don't think about.

Things as fundamental as the way people read are totally different in different cultures. In English, eye tracking will show an F-shaped heatmap; while Japanese, Chinese and Korean audiences have a very different pattern (eg. a Chinese audience produced lots of smaller hotspots, much more broadly distributed than the F-shape).

Example: runs through the McDonald's website as it appears around the world (varies hugely).

Contextual design: established by Edward Hall in the 70s. People in different cultures communicate differently – some are high-context, some are low-context. Asian cultures are more high-context – this makes pictures and interactivity more accepted as a form of communication.

Colours change significance – in China, red is good; in most English speaking cultures, red is bad.

How do you indicate the availability of different languages? Flags are bad – flags indicate a country and multiple countries speak the same language. There is a push for a standard icon (languageicon.org).

Symbols – $ used to indicate “currency” in a general sense is confusing if you don't use dollars. So if you use $ to indicate the concept “invoice” someone might think they're being charged in dollars.

Images – a minefield. Ikea had to remove a woman from an image because she was wearing pyjamas, which was offensive in Saudi Arabia. WhatsApp had a version of their site where the text content was translated, but the screenshot of the phone was always in English.

To avoid all this, i18n and l10n has to be involved and considered from the very beginning.

l10nchecklist.com is a really useful website. It has a huge list of things you should be thinking about in your projects.

@garazi

Katie McLaughlin – Unicode

“I love emoji! ...I love how broken they are!”

TLDR: because computers, we need unicode.

If you use unicode, most languages will be fine… but Japanese has multiple character sets, including emoji!

Emoji came from a closed mobile network in Japan, which had 16x16 pixel characters which were pictures. They were not images, they were not emoticons, they were something else we now call emoji.

The unicode consortium initially refused to include emoji in the Japanese character set. But people kept asking and the basic set of emoji eventually went into unicode… but nobody really noticed.

But then iOS5 included emoji and snuck it into English character sets… and it exploded.

Now emoji are expanding to include non-japanese cultural references like popcorn and burritos.

However emoji don't look the same everywhere. Mistakes were made. Things were rushed.

  • The yellow heart was encoded in unicode with speckling, which is an old way to represent gold in black-and-white (used in old heraldry systems). The first Android emoji for “yellow heart” was infamously translated from that speckled glyph into a hairy pink heart.
  • Flushed face emoji look so different on different platforms that they might look flushed, embarrassed or even like they're doing an “aww shucks”.
  • Some emoji for “clapping” showed hands with thumbs pointed out – palms not facing each other. Whut?
  • The emoji for “blonde” isn't always blonde.
  • Question mark is sometimes an exclamation mark.
  • Grinning and Grimace – most people are confused by this. The eyes change to indicate extreme happiness vs extreme sadness. (there's a whole paper on this from the University of Minnesota)

How do you get a new emoji/glyph in now? The unicode consortium will take applications on grounds including:

  • For backwards compatibility – eg. the Yahoo Messenger cowboy
  • To complete a set – one implementation didn't have the entire zodiac set
  • Cultural pressure – asking often enough gets you bacon and tacos

Brands, memes, etc will not get in. Things that are too specific won't get in - “cocktail” yes but not specific cocktails.

The broader use of unicode characters is creating some challenges. Emoji will break certain features of javascript; URLs didn't/don't work (there's a text fallback).

(spoon emoji).ws is a real website. RFC 3492 (punycode) defines a way to encode unicode – emoji included – into a standard alphanumeric string, because URLs have to work somehow: http://xn--9q9h.ws (via charset.org/punycode.php)

You can't use certain javascript string operations on emoji, so test carefully!
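For example, emoji outside the Basic Multilingual Plane are stored as surrogate pairs, so naive string operations give surprising answers:

var pizza = '🍕';
pizza.length;             // 2 - one emoji, two UTF-16 code units
pizza.charAt(0);          // half a surrogate pair, not a printable character
pizza.split('');          // ['\uD83C', '\uDF55'] - the emoji broken in two
Array.from(pizza).length; // 1 - ES2015 string iteration is code-point aware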

Emoji input can be hard – people have tried to standardise, but Slack and Hipchat do different glyphs for :cake: and (cake). Neither is an emoji. Autocomplete can go wrong as well – :$ should not become “money mouth” emoji.

If you are using emoji, be careful –

  • include fallback text (use title, alt and aria correctly)
  • use the formal names
  • stick to the standards
  • remember to actually declare meta charset utf-8 – otherwise you'll be getting ascii

The future: we are getting more emoji! We're getting bacon! We're getting egg...noting chicken was already in emoji so now you know which came first. More emoji are coming. You'll need to keep up with implementations.

Windows is adding groups – type in multiple characters and you can get combination glyphs like “ninja cat”… because cats? There are proposals for things like girl+chef to produce a female chef glyph.

Take away – we need to implement responsibly, but don't forget to have a little fun as well.

Slides via glasnt.com/slides

@glasnt

Marcos Caceres – Progressively Approaching Service Workers

Reference: The Extensible Web Manifesto #extendthewebforward

The way standards evolve can be strange - “HTML5 is basically reverse engineering IE6...”

There are lots of APIs that aren't really that great. The standards community started thinking… we kinda suck at APIs. Particularly after the appcache debacle, people decided to focus on the primitives and stop trying to be “clever”.

The approach behind service workers is to work out primitives – simple pieces that can be combined into more complex things.

The architecture of service worker: “it's just a glorified event handler!” Little services that spin up, do something and then disappear quickly. It has a short life cycle and publishes events to reflect that life cycle, eg. activate when it starts up.

“one weird trick”… most things in service workers are available in the Window object! eg. window.caches

The building blocks:

  • Fetch API
  • Cache API
  • Notifications
  • Push API
  • ...etc

These pieces are all intentionally simple; and they are not necessarily easy to use, to ensure they stay simple without being dumbed down.
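A minimal sketch of how those primitives can combine into a simple offline-first handler (file and cache names are placeholders):

// in the page
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// in sw.js - "just a glorified event handler"
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('v1').then(function (cache) {
      return cache.addAll(['/', '/css/site.css']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});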

These are things you should be trying out – eg. if you need to send notifications to users… you should try using the Notifications API! It lets you send text but also cool things like vibration patterns.
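A rough sketch, assuming a registered service worker and simplified permission handling:

Notification.requestPermission().then(function (permission) {
  if (permission === 'granted') {
    navigator.serviceWorker.ready.then(function (registration) {
      registration.showNotification('Build finished', {
        body: 'All tests passed.',
        vibrate: [200, 100, 200]  // vibration pattern on supporting devices
      });
    });
  }
});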

The dev tools are pretty good in Chrome and Firefox – they let you inspect these objects; send fake notifications; etc.

The hard part goes to authors to come up with good use cases; and to make good architectures. The specs aren't going to try to be clever, because they don't want to “appcache it” again.

Love the specs...but you probably don't want to read the specs.

Read wicg.io or MDN!

@marcosc

Fiona Chan – CSS code smell sanitation

Most of us have a love/hate relationship with CSS. We love it because it's pretty easy to learn and it's pretty powerful. But CSS can get messy very quickly; and to some people CSS is kind of dark magic – there are so many ways to do any one thing, which one is “right”? Which one do you pull up on a code review?

There are things you can do to minimise the problems.

CSS Linting – checks for basic syntax issues, but can enforce code style as well (naming conventions), etc.

  • You do want to inspect and modify the default rules to suit your project and team requirements. Running totally default rules probably won't go over well.
  • You can run linter...
    • in a build (ideally before merge)
    • at author time in editors like Sublime (the SublimeLinter package)
    • or both!
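The talk didn't prescribe a particular linter, but as an example of tweaking the default rules, a stylelint config for a project might look something like this (rules chosen purely for illustration):

{
  "extends": "stylelint-config-standard",
  "rules": {
    "declaration-no-important": true,
    "max-nesting-depth": 3,
    "selector-class-pattern": "^[a-z0-9-]+$"
  }
}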

So linting is good, but you should not rely solely on linting. Generally speaking you should never rely entirely on tooling.

So what else can you do?

  • Simple measures can be surprisingly effective – very long selectors, very long rules… is it all necessary? If it looks big and complex, it probably is. If a selector seems excessively long, it probably doesn't need to be that long.
  • Avoid magic numbers – numbers that “just work” but aren't explained. These are brittle. Wherever possible, use a non-magic-number implementation. If you do have to use a magic number, comment with details of how it's derived. Flexbox avoids a lot of old cases that required hard-coded numbers, so it's worth investigating a move to flexbox.
  • Stress test your UI – don't just write and test the perfect scenario. Change the font size. Try different screen sizes, devices, etc.
  • Watch out for values being set then unset/reset later on. That's a warning sign that the base/default settings are wrong. Keep base components as simple as possible and put more styles into the modifiers.
  • Old prefixes lingering on – do you still have code you no longer need, because your browser support has changed? Even if you use autoprefixer, have you updated your supported browser configuration recently?
  • Review the build in the browser – don't just read the code, check it out and look at it. Inspect things, kick the tyres. Use the inspector to turn off properties to see if there is any unnecessary code hanging around.
  • Consider maintainability – are things named clearly? Is the code organised?

Don't just rely on linting – code review properly, check out the new work and try it in a real browser.

@mobywhale

Slides: CSS Code Smell Sanitation by Fiona Chan

Dmitry Baranovskiy – Zen of JavaScript

(No writeup can truly do justice to Dmitry's delivery, this is just an attempt to give an impression of the moment!)

Dmitry is not about to tell us that everything you know about JavaScript is wrong… although it is…

Dmitry remembers that when he first met JavaScript, nobody called it JavaScript. When he was studying at university he was told “you can use any language you want”, so he chose JavaScript. It was 1997. People were laughing because it was “not serious”.

The problem with JavaScript is everybody knows it, nobody knows it. People don't pick up JavaScript so much as run into it.

JavaScript has long been considered a bit of a joke. People have bent JavaScript to meet what they expect – you can tell JavaScript written by a Ruby guy or a Java guy. Even JavaScript developers layer on Angular and React and that changes the way you write.

But what is the JavaScript way of writing JavaScript?

There is no wrong way. JavaScript was written so that any way is the right way. “You can still write shitty software, don't worry...”

Those who are unaware they are walking in darkness will never seek the light. - Bruce Lee

Zen has the well-known conundrum of 'what is the sound of one hand clapping'… in JS the question is probably 'what number is not a number'. We are afraid of NaN!

NaN.

What if NaN is there for a reason? What if we used it? If we have a function that returns a number but returns null if there's an error… why not return NaN instead of null?

People are afraid of NaN because the first time you get NaN it's because you fucked up. So from then on you think if you see NaN you did something wrong.

Accept NaN. It's there. It exists. You can use it.

Dmitry used NaN in real code twice last week… he's not saying his colleagues appreciated it.

Equality.

Is JavaScript good or bad at equality? Depends who you ask and how you look at it.

== is hated… why so much hate? Dmitry's current team has a rule you can not use ==. There are reasons for it. There are unexpected and confusing results when you use ==. So we use ===.

=== is a true friend, always telling you what you want to hear.

But === is not fixing the problem, it's covering the problem.

What about <= >= < > …? Not using these is like hiding by covering your face. If you don't understand what you're doing, you are in trouble.
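A few of the classic surprises that drive rules like “always use ===”:

'' == 0            // true  - both sides coerce to 0
'0' == 0           // true
'0' == ''          // false - so == isn't even transitive
null == undefined  // true, but null == 0 is false
'' === 0           // false - no coercion, different types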

People look at JavaScript and come up with theories how it works, then they code according to those theories. Don't make up stupid theories – read the spec. There's no mystery. It's documented.

Acceptance.

JavaScript is very accepting. It accepts all your hate, all your blame. All your love.

.charAt() doesn't care what you pass into it. Be like .charAt(). It always returns something. Exceptions? Why throw exceptions? Why be so rude?

It doesn't care what it's running on either. Pass it a number it returns a number. Pass it NaN it returns a character from NaN.

String.prototype.charAt.call(NaN) → “N” (the NaN gets coerced to the string “NaN”)

Emptiness.

Emptiness is very important in Zen. JavaScript has three emptinesses: NaN, null and undefined.

Undefined isn't even null. It's more empty than null. But JavaScript has something even more empty than that… JavaScript can have a placeholder that returns undefined. But if there's no placeholder it's not even undefined.

There is nothing new.

The “new” keyword doesn't smell like JavaScript. It was put in to smell like Java, which was fashionable at the time. As Java stopped being fashionable, people started leaving it out.

RegExp() and new RegExp() → the result is the same. Function() and new Function() → the result is the same.

But then you have things like:

Date() is not equal to new Date(). Same for Number(), Set(), Symbol()

Date() is an alien function, not from JavaScript, it was brought in from Java.

JavaScript and exceptions are things from opposite universes.

42/0 in JavaScript doesn't throw an exception, it returns Infinity.

The five faces of function.

You can use Function five ways. You can call a function, you can return a function, you create contexts and closures with functions, you can use function as a constructor, you can call it with strings or numbers.

Forgiveness.

People write rubbish. JavaScript forgives you.

All your bad code. All your libraries. You want to run JavaScript on servers, on phones. JavaScript forgives you.

You want to put commas first, you want to omit semicolons. JavaScript forgives you. Dmitry doesn't. But JavaScript does.

Is it a good thing, or a bad thing? Dmitry doesn't know. But it's how the language is written.

JavaScript is not a weak language, it's a flexible language. It's an encouraging language. You don't have to follow the rules of Python or Java to write JavaScript, but you can. You can follow your own rules if you want to. It gives you that freedom.

And those who were seen dancing were thought to be insane by those who could not hear the music. - Nietzsche

Be the ones who hear the music.

@dmitrybaranovsk

Alicia Sedlock – Frontend testing

Yes, Alicia really has a pet hedgehog! This is a thing!

Finance in the States is kinda complex… but Society of Grownups has a legal obligation to ensure the advice they give is sound (legally and morally). There is a high risk of breaching that obligation and losing trust with customers.

As devs we don't want to give users broken products; nor break trust with QA and the business; nor waste development time.

“Testing's really fun to talk about because... nobody's really passionate about it...”

What is FE testing anyway? A collection of techniques to hold developers accountable for writing and maintaining functioning and usable code bases.

While frontend has been doing some good work in testing, with the push to client-side apps the responsibility has increased and the commitment to testing hasn't necessarily kept up.

With testing there tends to be a focus on the outcome more than the input.

(Instagram photo by Daniel Smith, @growthhackerau: #code16)

Automation is your friend! While you don't have to automate tests, it really really helps. You are more likely to keep up with testing when it's running automatically.

The components of frontend testing:

  • The old school...
    • unit
    • integration
    • acceptance
  • The new school...
    • visual regression
    • performance
    • accessibility

The newer school tools are coming in because frontend testing had different needs than traditional backend testing.

Unit

  • there to ensure the smallest testable pieces (units) work as expected
  • Anatomy of a unit test (in Jasmine):
    • logical grouping of tests
    • individual test description – needs to be genuinely descriptive
    • calculation/run code
    • expect/assert the result is correct
    • suite has setup and teardown – functions that run before and after tests, for organisation and cleanup
  • Spies –
    • this is where you don't know the specific result but you want to know what's been happening.
    • eg. spyOn(someObject, 'someMethod') → expect(someObject.someMethod).toHaveBeenCalled() shows that the method was called, even when you don't know or care about the actual value it produced – see the sketch below.
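Roughly what that looks like in Jasmine (Cart and its methods are invented for illustration):

describe('cart totals', function () {
  var cart;

  beforeEach(function () {                         // setup
    cart = new Cart();                             // hypothetical object under test
    spyOn(cart, 'recalculate').and.callThrough();  // spy, but still run the real method
  });

  afterEach(function () {                          // teardown
    cart = null;
  });

  it('adds the item price to the total', function () {
    cart.add({ price: 10 });
    expect(cart.total).toBe(10);                    // assert the result
    expect(cart.recalculate).toHaveBeenCalled();    // assert via the spy
  });
});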

Integration

  • Checking that the units play well together in the bigger picture.
  • Often takes or inspects multiple inputs, rather than just one
  • Brings in more setup and result checking, including DOM inspection to see that the correct state has been reflected (progress classes, etc)

Acceptance

  • This is to ensure key tasks in the product work – ie. checking that whole flows are working rather than deeply inspecting each step.
  • Commonly done with Selenium; can be done in other frameworks eg. jasmine-integration, Karma, Nightwatch

Visual Regression

  • Checking for inconsistencies in the view (rendering)
  • While atomic design and style guides have made this more stable, problems still occur.
  • In literal terms: the tool takes a screenshot before and after changes; then diffs the screenshots.
  • Visual regression testing looks a lot like acceptance testing, with added lines to screenshot things.
  • You can screenshot the whole page; but that will trigger failures everywhere if you change a repeated component. Screenshotting components (parts of the page) reduces false failures.
  • Visual regression tests can help keep track of responsive design, particularly the odd sizes that people tend to forget to check.
  • Big challenge with this – getting the workflow right. These tests are subjective; also you may have intended the change so a “failure” isn't going to feel right.
  • Tools: casper, phantomcss/phantomflow (includes control over the level of detail that will trigger failures), Percy.io (new tool with a good approval workflow), wraith, webdriver.io, BackstopJS

Accessibility

  • This is a big concern, but devs like to break it…
  • Accessibility testing is fundamentally about testing your site against accessibility standards.
  • There is some subjectivity; but overall the automated tools will give you a great baseline.
  • This does quickly require collaboration – eg. designers need to learn how to
  • Tools: a11y (grunt-a11y), pa11y, react-a11y

Performance

  • These tests aim to keep your project honest about performance.
  • This is not a really strong area in the industry – there's a lot of debate and it's tempting to fudge the numbers.
  • Tools: grunt-perfbudget, gulp-size, perf.js

Bonus – Monkey testing

  • Based on the old adage that 100 monkeys on 100 typewriters will eventually produce the works of Shakespeare.
  • The idea is to point a tool at your site that hammers on it randomly and fills things out with ridiculous input…
  • This is chaotic and crazy but it will definitely stress test your app!
  • Tools: gremlins.js (awesome naming… gremlins.createHorde().unleash())

Other stuff?

What about linting? Alicia doesn't really see linting as “testing”. Testing is about outcomes, while linting is about input. It's useful and recommended but it's not really part of testing.

More points...

So should we use all of these tools? No… you might not need every type of testing. If you don't have a whole lot of javascript, you might not need unit tests.

Do an assessment – which areas of the code base are most important, most risky, most fragile?

You can definitely go overboard with tests. Tests should not be added for the sake of it, or because you can. Tests cost creation, maintenance and build time. If builds get too slow people may start to avoid running them at all. Writing 100% coverage tests generally isn't great, it's just too much.

Perhaps you can do more than one type of testing inside one tool to keep overheads lower.

If you're not sure where to start, pick the one with the most bang for your buck.

Also beware of writing tons of tests too early in the process. You can waste a lot of time.

Yes… writing good tests takes practice.

Tests should always fail at some point (you should always see it fail before you trust it!).

The excuse corner:

  • it is more code to write – you need to balance this with other work
  • your code will still have bugs – but at least not the same ones over and over
  • legacy code is hard to test – start by adding a few unit tests. It's ok to start small!

Then you can get the good old “opinionated devs” problem. It can be hard to get people to adopt new practices. Build a test-conscious team through education, experimentation and practice. You will need to be persistent to guide people through and make this happen.

MOAR!!!?

  • Great book - “Frontend Architecture for Design Systems” by Micah Godbolt
  • FrontEndTesting.com (includes Slack channels that are slowly growing)

https://speakerdeck.com/aliciasedlock/the-state-of-front-end-testing

@aliciability

Also: Society of Grownups, Girl Develop It teacher, Hedgehogs & Reflogs

Q – given a choice, should FE devs do acceptance tests in familiar tools or would you recommend they just learn to write and submit selenium tests? Or perhaps the way to ask is should there be a line between dev and QA?

They should be having that conversation – the two teams will have different focus, different approaches that are complementary. But they should know what the other is up to.

Q – lots of these tools seem to be flaky in some browsers… what's the magic combo?

A lot of it's just still flaky… we need to use them more and demand more from vendors.

Q – how do you avoid smashing your production site?

Mocking libraries are great so you don't hammer your real APIs; and you should definitely let your security team know what kind of traffic to be expecting.

Yoav Weiss – Taking back control over third party content

The reason Yoav wants to talk about third party content is because our ecosystem is broken. People talk about all kinds of tech – responsive, performance, testing… – and you take that to heart and implement things. But then the business has a requirement to add some code, which brings some more code… which eventually relieves itself all over your lovely performant code.

Who is to blame? Well all that bad code comes from people who are trying to make money, so people can eat and live indoors and nice things like that. Then some other people are trying to track what was happening with users so the experience can be improved. Each piece does have reasons.

1. The Problem

A year ago the Verge put up an article “The mobile web sucks”, saying the mobile browsers were not trying hard enough. So someone analysed the Verge's own website and discovered it weighed more than 12 megs. The problem here is the Verge has performance engineers… but they don't control a whole lot of that code.

Other research suggests ad content is causing most of the mobile web's downloads… and ads are just getting worse.

Ad blocker usage has been increasing in response. Opera now ships a natively included ad blocker; and most other browsers have a range of ad blocker extensions.

Then there are the walled gardens like Apple News and Facebook Instant Articles which are trying to help users by forcing stripped-down versions of content. Google launched AMP (accelerated mobile pages) to try to do similar things; although it's still not the fastest way to build a website, it does provide a level of guaranteed performance (it will be fast enough). Plus AMP pages get a boost in mobile search results. These alternative formats are basically forking the web – you have to publish in HTML and various other formats which are not HTML.

Advertisers started to get the message that ad blockers were going to break their business. They did two things: first, asked their advertisers to make their ads more performant (LEAN by IAB); second, asked publishers to try to 'educate' users about the impact of ad blockers (DEAL by IAB), which really just means making users feel bad.

So we have ad blocker blockers.

The web responds with ad blocker blocker blockers.

So we have an arms race… and like most arms races, the people who benefit are the ones selling ammunition.

2. Workarounds

Async loading – if you are including a script tag, ensure it is set to async. We've known this for a long time and most third parties are providing async snippets; but check. Async is a good first step, but it is not the whole solution – you still have arbitrary code running with full access to everything, including bandwidth.

Preconnect and preload:

<link rel="preconnect">
<link rel="preload">

This lets you feed content to the user to avoid slowdowns. These are awesome! Full disclaimer, Yoav worked on them...

They are good but you can't apply them to all third party resources, because they are often downloading dynamic content.

Service workers can also mitigate some negative effects.

Content-Security-Policy (CSP) lets you send policies to the browser and enforce them, eg. restrict which hosts the browser can connect to. It has some limitations with frames and because it's security-oriented it's not the panacea for all things third party.

Iframes – this is a powerful way to isolate third party code. Third party code can go to town inside iframes without giving access to the main document's DOM, which is nice. They can be further restrained with sandbox – <iframe sandbox> – which can prevent scripts running, prevent alert(), form submission and so on. However this also often means third party code doesn't do what it needs to do. You can allow things in a granular way: <iframe sandbox="allow-scripts">.

Not all third parties support being iframed, because they need to access the DOM. Ads often have visibility constraints (they need to check they are actually visible, so they can actually charge the advertisers), analytics providers often need to track user activity to create their reports.

The IAB came up with SafeFrame – a way to enable ads to be iframed, but still give ads the information they need to work. It's a script that handles communication to the third party code in iframes without giving full access to the DOM.

All of these things, while necessary, are not sufficient. eg. Iframes can consume enough resources to cause jank in the main frame.

There are things the third party providers can do to be better web citizens.

  • use passive touch events, which are much faster than the default active events… document.addEventListener('touchstart', handler, {passive:true});
  • Intersection observers – working to limit the events being fired to the content that's actually being used/visible at the time. Basically it removes polling and gives notifications.
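A rough sketch of the observer pattern (the threshold and element are made up):

var observer = new IntersectionObserver(function (entries) {
  entries.forEach(function (entry) {
    if (entry.isIntersecting) {
      // only now start doing work for this slot - no scroll polling needed
      console.log(entry.target.id + ' is at least half visible');
    }
  });
}, { threshold: 0.5 });

observer.observe(document.querySelector('#ad-slot'));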

3. The Plan™

How can we align everyone's actions to the user's needs?

The first attempt at civil disobedience against the ad industry was do-not-track. The idea was to allow users to opt out of tracking. It turns out if you ask your users explicitly “do you want to be tracked”, they generally say no. The ad industry suddenly didn't support do-not-track and it's a toothless header now.

Content-Security-Policy can be used here, eg. the report-only mode that lets you test policies to see if they would break your site (without actually breaking it).

That Other Team™ - in many cases performance teams act as police teams, enforcing things on other teams which are less interested. The tooling is designed for builds and monitoring, but not so much in the browser yelling at devs when they are breaking things.

W3C: Content Performance Policy – Yoav has proposed a standard, to provide a way for both content owners and third parties to declare to the browser that they are performant, that they're not doing things that destroy performance. CPP was inspired by the desire for a standard alternative to Google's AMP.

W3C: Feature Policy – includes sync-script, sync-xhr, docwrite. Potential syntax for the headers… Feature-Policy: {"disable":["sync-xhr","docwrite"]}

W3C: Resource size limits – we want to be able to limit the amount of bytes that third parties are downloading, as part of tackling that user click fear. That is, people worrying that following links will use too much data and cost them money.

W3C: CPU and bandwidth priority – there is no current way to tell the browser which content to prioritise. Current proposal uses a “cgroup” attribute, while the syntax may change the idea is a way to mark some content and define CPU and bandwidth share and priority settings.

User experience – ads can be intrusive and confusing, but it's very hard to enforce anything around this in the browser. The definition of “annoying” is fuzzy and hard to detect.

Privacy?! - none of this addresses the privacy issues in the ecosystem. This is mainly because tracking is done on the server, so it's hard to solve from the browser side.

What does all this enable? What change do we hope for?

  • Control for site owners (so they can give users a good experience but also get paid)
  • Smarter embedders (better third party content, make it possible to be good citizens)
  • Smarter ad blockers (eg. take ad performance into account, not just domains)
  • Happier users!

@yoavweiss

Josh Duck – Designing web apps for performance

Josh works on JS performance at Facebook. It's a big challenge with such a big application, with so many moving parts; but also without limiting what engineers can do.

Talking about more than HTML5, JS and the browser… looking at the whole web architecture.

Our phones are basically supercomputers – more powerful than Deep Blue, the machine that beat chess champion Kasparov. So why not do everything on the device? Being connected is more important than raw power. Without a connection our phones feel dead.

Why the web has worked so well for so long?

I found it frustrating that in those days, there was different information on different computers, but you had to log on to different computers to get at it. Also, sometimes you had to learn a different program on each computer. So finding out how things worked was really difficult. Often it was just easier to go and ask people when they were having coffee.

- Tim Berners-Lee (Answers For Young People)

The web is more than the browser. Without the server we'd be doing some fairly heavy and inefficient things to get information. Servers and URIs let us look up just the little bit we need at the time.

We moved on from static file serving, to on-server databases and full applications. The server had to learn to respond to user interaction. The tools people had at the time (like Visual Basic) just didn't work for the web.

We re-learned how to design applications, so we could design apps for servers. The web became huge on the back of the server-side rendering model.

Now we're in a new transformation, from server-side to client-side rendering. This avoids latency for certain interactions.

The language of choice is JavaScript, which is weird in a way as it had such blunt beginnings. But we've learned a new way to design applications so they make sense in the browser.

But by moving all the logic to the client we've created a new problem: load time. Plus we still have all the old problems like janky experiences.

We're still working out how to make all this stuff fast.

Some people just give up and say JavaScript is too slow. But Facebook were serving React code in their native app and the performance there was fine.

Native apps have set some terrible examples though – apps download tens of megs of code and we can't do that on the web as people won't put up with the initial load. Plus if you cache the code, you have cache invalidation problems and generally blow up the value of caching in the first place. Should you prefer a warm cache or regular updates?

Universal javascript (aka isomorphic rendering) is another approach being explored on the web. Initial pre-rendering on the server does help some things like SEO, but for Facebook it was masking problems rather than solving problems. Things weren't fast, they just looked fast sometimes. Things would render fast but the JS wouldn't arrive before users were trying to click the Like button.

So how can we fix startup time?

The client-side JS world has broken the problem the web solved – you no longer download just the bit you need, you have to download everything first.

Facebook looked at the strengths of the client and server. The server's great at controlling the cache; the client's great at handling interaction. Plus we can have offline services in the browser. But the client will never be good at data fetching, code loading and SEO.

There's not a single way to fix everything.

We need to handle download, parse, compile and execution time. Lots of performance issues are focused on download.

Don't just make things fast – try not to do them at all. The fastest resource is the one you don't have to download.

We need to introduce boundaries into the application where it makes sense; and this is where routing libraries come in. Facebook wrote their own (matchRoute). This is combined with a build system that creates bundles for each route. All up this means whole chunks of the app will only load on demand when the user needs them.

GraphQL also comes into play here as it's a more flexible way to query data. You can design the query on the client, then execute it on the server. This reduces the number of round trips for data.
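A sketch of the idea: the client describes exactly the data it wants and gets it in one round trip (the endpoint and field names are invented, not Facebook's schema):

var query = '{ user(id: "42") { name, friends(first: 3) { name } } }';

fetch('/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: query })
}).then(function (response) {
  return response.json();
}).then(function (result) {
  console.log(result.data.user.name);
});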

Facebook does use all these techniques!

Relay – library that works with React and GraphQL to give people performance gains without massive amounts of work.

The web is about the client and the server. Don't just think about HTML and JavaScript. Think about the fact the platform has two different and powerful pieces: the client and the server make the web an awesome platform.

Slides - shows as private but should load

@joshduck

Rob Howard – The things you can't do

There are things you can't do in JavaScript!

Simple things like the for loop can do much more than we usually need – and they're error prone precisely because they're so powerful. So what about for...in instead? It's handy, but it iterates over extra things you don't want. Then there's forEach, which hits the sweet spot – it does just what we want and no more.

How about turning one array into another array? forEach does more than we want here, so use map instead. It's built to take an array of a particular size and produce another array of the same size, with each value transformed. Similarly, filter and reduce each do one thing cleanly and well, without too many options that could go wrong. This is often called functional-style programming, although that's a loaded term.
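
A quick sketch of that progression, using a made-up list of prices:

```js
const prices = [5, 12, 8, 24];

// A plain for loop can do anything, which also means plenty of ways to get it
// wrong (off-by-one bounds, mutating the wrong thing, and so on).
for (let i = 0; i < prices.length; i++) {
  console.log(prices[i]);
}

// for...in iterates keys and will also pick up inherited enumerable properties -
// more than we asked for. forEach does exactly one thing: visit each value.
prices.forEach(price => console.log(price));

// map: one array in, one array of the same length out, each value transformed.
const withTax = prices.map(price => price * 1.1);

// filter and reduce are similarly single-purpose.
const expensive = prices.filter(price => price > 10);        // [12, 24]
const total = prices.reduce((sum, price) => sum + price, 0); // 49
```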

ES6...ES2015...ES400000?

ES6 adds features like let and const, which give you some level of safety. If you try to overwrite a const it will blow up – which is what you want: have it blow up while you're looking at it, not after you've shipped it to customers.
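
For example:

```js
const MAX_RETRIES = 3;

MAX_RETRIES = 5;
// TypeError: Assignment to constant variable.
// It blows up here, while you're looking at it - not after you've shipped.
// (Note const stops reassignment, not mutation - which is where the
// immutability libraries below come in.)
```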

Immutability – mutable data causes problems all the time in real code. You can add 'seamless-immutable' or 'Immutable.js', which give you immutable data structures in JavaScript. Give them a try and see how you like them.
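
A small sketch of what that buys you – the Immutable.js Map calls below are from its documented API, but treat the details as illustrative:

```js
// Plain objects are mutable: anyone holding a reference can change them under you.
const settings = { theme: 'dark' };
settings.theme = 'light'; // silently fine

// Object.freeze gives shallow immutability built in.
const frozen = Object.freeze({ theme: 'dark' });
frozen.theme = 'light';   // ignored (a TypeError in strict mode)

// Immutable.js values never change; "updates" return a new value instead.
const { Map } = require('immutable');

const v1 = Map({ theme: 'dark' });
const v2 = v1.set('theme', 'light');

v1.get('theme'); // 'dark' - the original is untouched
v2.get('theme'); // 'light'
```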

The kinds of things...

Flow – created by Facebook. You can declare that a function takes a string and returns a string, and never anything else, and Flow will make sure only strings ever get passed in. It gives you some really smart checking that you can run over your JS from the command line before you ship – a similar dev experience to a compiled language, and broadly the ability to make use of type checking.

Type checking cuts down the potential for things to blow up – if something can only possibly work with numbers, letting strings pass in opens up the chance of coercion errors. By being too accepting, the potential for error is increased.
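
A tiny example of the kind of annotation Flow checks (run with the flow CLI before you ship):

```js
// @flow
// Flow reads the annotations and checks every caller before the code ever runs.

function shout(message: string): string {
  return message.toUpperCase() + '!';
}

shout('ship it');  // fine
shout(42);         // flagged by `flow check`: a number passed where a string is expected
```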

Rule of Least Power – use the lightest solution you can. Do not use the biggest stick when it's not required. Save it for when you need it.

JSON is a great example of something that is simple. It's limited to the things it should do. It refuses to do anything else. HTML and CSS are similar – there are limits to what you can do, because they should have those limits.

Use the least amount of power you can get away with, to get the most predictable result. The things you can't do will help avoid things you shouldn't do.

This is not so much a call to action as a call for consideration. These are things to think about.

tinyurl.com/wdc16-cantdo

@damncabbage

Elise Chant – Installable web apps with web app manifests

Installable web apps aren't new – the iPhone could always do it.

A web app manifest can enhance the experience of an Installed Web App.

What do we already know about installed apps?

  • We know that each app opens in its own world, and that closing it is really more like minimising than completely closing it.
  • We know that to get new apps you have to go to an app store. It's often quite tedious. Sometimes we get distracted on the way to finding the app. We know we have to wait for it to download... and wait... and then install more on first run…
  • We know that apps can also go onto the home screen

All of these things communicate the idea: “my apps are the most important things on my phone”. We care less about the phone than the things the apps represent.

While we're all focused on native apps, there are thousands upon thousands of installable web apps. But we've encouraged users to think of apps as more important, or better, than the mobile web.

Maybe we haven't been diligent enough making the mobile web really good? Maybe we spent too much time on IE8 and not enough on the mobile web?

Why are we ok with web apps being stuck inside a browser app? What if they could break out of the browser? What if we could keep them and name them like apps, and collect our favourite sites alongside our favourite apps?

The Web App Manifest specification is the tool for this job.

How can you use it?

  • Link to a JSON manifest file in the HEAD of your site
  • The JSON file sets up the configuration required to make the site installable
  • Some browsers like Chrome and Opera will prompt you to add the site if you visit it often or for a long time
  • This means your site now behaves like an app – it doesn't have the browser chrome, it doesn't let you navigate away

Can you still install sites without a manifest? Yes, but you don't get the same behaviours – they're just glorified bookmarks. It still opens in the browser, you don't get any real estate back.

The manifest allows you to set the app's icons (an array of differently sized icons), name, theme, orientation, starting URL, and the colours of the app's context bar. There are many more properties that are useful for various use cases – read up to see what you'll get value from.
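
For example, a manifest linked from the page's HEAD with <link rel="manifest" href="/manifest.json"> might look like this – the names, colours and icon paths are placeholders, but the properties are standard members of the spec:

```json
{
  "name": "Example Notes App",
  "short_name": "Notes",
  "start_url": "/?source=homescreen",
  "display": "standalone",
  "orientation": "portrait",
  "theme_color": "#2b6cb0",
  "background_color": "#ffffff",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```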

It's really easy to test manifests – use Chrome Canary's dev tools to get a manifest inspector.

So why hasn't the entire internet done this already?

Discoverability is still fairly poor. We need to do a better job of promoting the existence and value of installable web apps. We need more sites/apps promoting their installability.

Some browsers are taking the lead – Opera is showing ambient notifications like a small mobile icon when the current site is installable.

Microsoft is looking at finding installable sites via Bing; and adding them straight to their app store.

You can also look at some other specs which make mobile web apps more powerful – access to the hardware, sensors, location etc. The things people like about apps can be done on the web.

Start by making your app installable, then update from there – make it work offline, add more powerful features.

It's wise to be iterative. But the hardest thing is always to just start.

@elisechant

See also: Web App Manifest quick start — Medium

Hadi Michael – Memory management in V8

JavaScript's portability made it an attractive target for distributed machine learning. Of course it has its quirks, and he had to solve the Math/number problems… but he still had issues with performance and memory management. The browser was just dying while calculations were running – jank, crashing tabs, and generally not a good result.

Problems he hit: heap size was increasing in proportion to document size, with lots of system objects; and 30% of the processing time was being spent on garbage collection pauses, which left too little time for the actual machine learning.

All values are stored on the heap. Memory can be represented as a graph with retaining links – memory can only be released when it is not reachable via any retaining pointer/link. That is, it can't be GC'd if something still 'needs' it.

The more objects there are on the heap, the longer it takes to collect all the garbage.

V8 splits values between young and old generations – if something survives past the first pass it is promoted to the old generation, where it persists across collections. The young generation's to/from memory spaces swap roles when one is cleared. Everything halts while this processing occurs.

Old-generation GC works via mark (set a mark bit to show it's live) → sweep (flip the mark bit back ready for the next mark; release anything that doesn't have the mark bit) → compact (memory compaction – a kind of defragmentation, to free up more contiguous memory). The old generation uses all the committed heap.

Orinoco adds optimisations to make GC components incremental, concurrent and parallel. Partly this is about using idle time for GC instead of halting processing at more critical times (jank!).

Understanding memory management is more than avoiding memory leaks, it's about finding optimisations.
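
Not from the talk, but a rough illustration of the kind of optimisation that thinking leads to: allocating fewer short-lived objects in a hot loop means less work for the young-generation collector. The numbers and function names here are invented.

```js
// Allocation-heavy: map() builds a brand new array of throwaway wrapper objects
// on every call, all of which becomes young-generation garbage moments later.
function meanOfSquaresNaive(samples) {
  const squared = samples.map(s => ({ value: s * s }));
  return squared.reduce((sum, o) => sum + o.value, 0) / samples.length;
}

// Lower GC pressure: the same arithmetic with no intermediate objects, plus a
// preallocated typed array (reused across calls) if the squares are needed later.
const squares = new Float64Array(10000); // sized for the expected input

function meanOfSquares(samples) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    squares[i] = samples[i] * samples[i];
    sum += squares[i];
  }
  return sum / samples.length;
}
```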

hadi.io/code16 (direct link)

@hadi_michael

Simon Swain – Rats of the maze

A video posted by Simon Swain (@simonswain) on

Keep an eye out for the video of this one later, it's performance art :)

ratsofthemaze.com

@simon_swain

Fin!