<p><em>tijmen.cc — Tijmen Brommet. Feed generated by Jekyll, 2021-02-15 (<a href="https://www.tijmen.cc/feed.xml">feed.xml</a>).</em></p>

<h1>How Shared Parental Leave worked for us</h1>
<p><em>2019-08-18 — <a href="https://www.tijmen.cc/2019/08/18/shared-parental-leave">original post</a></em></p>

<p>In November last year our son was born (he’s awesome).</p>
<p>We used Shared Parental Leave (SPL) to do parental leave together for the first 7 months after he was born. Looking back, SPL wasn’t very complicated to set up, but it took quite a bit of time to wrap my head around the rules, forms and procedures.</p>
<p>This is a write-up of how we approached it, in case it’s useful to others (probably if you work in the Cabinet Office).</p>
<p>Just note that I’m not an expert on these issues, and might be misremembering things. For the actual facts, <a href="https://www.gov.uk/shared-parental-leave-and-pay">read the guide about Shared Parental Leave</a> on GOV.UK. There’s also a <a href="https://www.gov.uk/government/publications/civil-service-employee-policy-shared-parental-leave">guide for Civil Servants on having a baby</a>.</p>
<h2 id="plan">Plan</h2>
<p>Our plan for the first 7 months was fairly simple: do the first month after the birth together, then my partner 3 months solo, then me 3 months solo.</p>
<p>This first month didn’t affect SPL. The Cabinet Office gives new dads 12 days of paternity leave. I took 3 days of annual leave, and 5 unpaid days.</p>
<h2 id="shared-parental-leave">Shared Parental Leave</h2>
<p>This is how I explain it to people: mothers usually get 52 weeks of maternity leave. If you want, you can convert a part of this into Shared Parental Leave, which you can use to take time off, either separately or together.</p>
<p>However, in our case only I was eligible for SPL because my partner switched jobs while being pregnant. This also meant that she only had a right to <a href="https://www.gov.uk/maternity-allowance">Maternity Allowance</a> (£148.68 per week), not <a href="https://www.gov.uk/maternity-pay-leave/pay">Statutory Maternity Pay</a>.</p>
<p>The way SPL works in the Cabinet Office is that the dad can take the leave under the same conditions as the mother: the first 6 months are at full pay, the next 3 months are paid at statutory pay (less) and the last 3 months are unpaid.</p>
<p>What this would have meant for us is that Nina would get 4 months Maternity Allowance, and I’d get 2 months full pay (filling up the 6 months) and 1 month of statutory pay.</p>
<p>Because the Maternity Allowance (£148.68 per week) is a lot lower than full pay we stopped the Maternity Allowance after 3 months. This meant all of my 3 months fell within the full pay period at the Cabinet Office. You can stop the Maternity Allowance by calling up DWP.</p>
<p>Of course, this whole thing would’ve gone way too smoothly if there hadn’t been some government weirdness. When we called DWP, the customer service agent disagreed with our plan, and told us it wasn’t necessary to stop the allowance. We decided to go against their advice because if we didn’t and they were wrong, we would’ve had to repay a month’s salary.</p>
<p>If you want to know more, feel free to <a href="mailto:tijmen@gmail.com">drop me an email</a>.</p>
<hr />
<p>More things to read:</p>
<ul>
<li>There’s more information on the <a href="https://sharedparentalleave.campaign.gov.uk/">SPL campaign page</a></li>
<li>Alice Bartlett wrote <a href="https://alicebartlett.co.uk/blog/shared-parental-leave">a super useful post about SPL</a></li>
<li>The Maternity Action website has <a href="https://maternityaction.org.uk/advice/shared-parental-leave-and-pay/">really good examples of entitlement</a></li>
</ul>

<hr />

<h1>LRUG: Reuse your government’s code</h1>
<p><em>2019-08-13 — <a href="https://www.tijmen.cc/2019/08/13/lrug-coding-in-the-open">original post</a></em></p>

<p>I gave a talk at LRUG yesterday about <a href="https://skillsmatter.com/skillscasts/14335-reuse-your-government-s-code">reusing code from GOV.UK</a>. The <a href="https://speakerdeck.com/tijmenb/reuse-your-governments-code">slides are on Speaker Deck</a>.</p>
<script async="" class="speakerdeck-embed" data-id="629c25f6557f46f0b6ce83d37fe7ccaf" data-ratio="1.77777777777778" src="//speakerdeck.com/assets/embed.js"></script>
<p>These are the GOV.UK projects mentioned in the talk:</p>
<h2 id="3-real-world-applications">3 real world applications</h2>
<ul>
<li><a href="https://github.com/alphagov/whitehall">Whitehall</a>, a really big app with a lot of history</li>
<li><a href="https://github.com/alphagov/content-data-api">Content Data API</a>, a data warehouse that uses a star schema database</li>
<li><a href="https://github.com/alphagov/content-publisher">Content Publisher</a>, a new app structured in novel ways</li>
</ul>
<h2 id="3-cool-patterns">3 cool patterns</h2>
<ul>
<li>Readable feature specs, as <a href="https://github.com/alphagov/content-publisher/tree/master/spec/features">shown in Content Publisher</a></li>
<li>A spam honey pot, as <a href="https://github.com/alphagov/feedback/search?q=giraffe&unscoped_q=giraffe">shown in the feedback application</a></li>
<li>A way to archive big tables, as <a href="https://github.com/alphagov/email-alert-api/pull/627/files">shown in email-alert-api</a></li>
</ul>
<h2 id="3-things-to-help-big-projects">3 things to help big projects</h2>
<ul>
<li>We configure lots of repos with a <a href="https://github.com/alphagov/govuk-saas-config/tree/master/github">script in govuk-saas-config</a></li>
<li>We share frontend code using components in <a href="https://github.com/alphagov/govuk_publishing_components">the govuk_publishing_components gem</a></li>
<li>We do some visual regression testing using <a href="https://github.com/alphagov/govuk-visual-regression">govuk-visual-regression</a></li>
</ul>
<h2 id="3-clever-team-tools">3 clever team tools</h2>
<ul>
<li><a href="https://github.com/binaryberry/seal">Seal of Approval</a> will remind you to review PRs</li>
<li><a href="https://github.com/emmabeynon/github-trello-poster">GitHub Trello Poster</a> will post your PR to Trello tickets</li>
<li><a href="https://github.com/alphagov/govuk-browser-extension">GOV.UK browser extension</a> allows you to switch between environments</li>
</ul>

<hr />

<h1>Look-a-Ryks: find your doppelganger in the Rijksmuseum collection (2017)</h1>
<p><em>2019-05-06 — <a href="https://www.tijmen.cc/2019/05/06/look-a-ryks">original post</a></em></p>

<p><a href="http://look-a-ryks.herokuapp.com">Look-a-Ryks</a> matches your face against portraits in the <a href="https://www.rijksmuseum.nl/en/rijksstudio">Rijksmuseum art collection</a>. It uses <a href="https://aws.amazon.com/rekognition">AWS Rekognition</a>.</p>
<p><img src="/media/2019-05-06-look-a-ryks-1.png" alt="" />
<em>Amazon thinks I look like the <a href="https://www.rijksmuseum.nl/en/collection/RP-P-1889-A-14501A">creepy older brother of painter Frederik Weissenbruch</a>.</em></p>
<p>A couple of years ago I ran into a picture of a <a href="https://boingboing.net/2017/05/11/public-private-surveillance.html">crashed facial recognition system</a> in an Oslo pizza place.</p>
<p>I thought it foreshadowed a pretty bleak world where you’ll be filmed, scanned, recognised, analysed and remembered in every store you’ll ever visit. This will be to provide a “superior customer service” of course, but we all <a href="https://www.bloomberg.com/news/articles/2017-05-19/uber-s-future-may-rely-on-predicting-how-much-you-re-willing-to-pay">know what happens next</a>.</p>
<p><img src="/media/2019-05-06-look-a-ryks-2.png" alt="" /></p>
<p>I wondered if we could come up with use cases for facial recognition that weren’t completely evil.</p>
<h2 id="idea">Idea</h2>
<p>First, we’ll need the facial recognition technology. This can be provided entirely by <a href="https://aws.amazon.com/rekognition">Amazon Rekognition</a>. This is the service <a href="https://techcrunch.com/2019/01/17/amazon-shareholders-want-the-company-to-stop-selling-facial-recognition-to-law-enforcement">under fire for selling to US law enforcement agencies</a>.</p>
<p>Secondly, we’ll need a massive dataset of faces to play with. Luckily, the Rijksmuseum in Amsterdam has an amazing API that <a href="https://www.rijksmuseum.nl/en/api">exposes almost 600,000 works of art</a>.</p>
<p>I named it <a href="http://look-a-ryks.herokuapp.com">Look-a-Ryks</a>.</p>
<p><img src="/media/2019-05-06-look-a-ryks-3.jpeg" alt="" />
<em>The actual Rijksmuseum</em></p>
<h2 id="how-it-works">How it works</h2>
<p>To start us off we’ll need all relevant objects from the collection. I decided to go for the easy route by downloading all data for the search term “portret” (“portrait”), of which there are 37,000.</p>
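<p>As a rough sketch of the kind of request involved — this is an illustrative helper, not the actual script. The parameter names (<code>q</code>, <code>p</code>, <code>ps</code>) follow the public Rijksmuseum API, and the API key is a placeholder:</p>

```ruby
require 'uri'

# Hypothetical helper: build a Rijksmuseum collection search URL.
# `p` is the page number and `ps` the page size, per the public API docs.
def rijksmuseum_search_url(api_key, query, page: 1, per_page: 100)
  params = { key: api_key, q: query, p: page, ps: per_page, format: "json" }
  "https://www.rijksmuseum.nl/api/en/collection?#{URI.encode_www_form(params)}"
end

rijksmuseum_search_url("YOUR_KEY", "portret")
# => "https://www.rijksmuseum.nl/api/en/collection?key=YOUR_KEY&q=portret&p=1&ps=100&format=json"
```

<p>Fetching each page of results and following the image URLs gives you the portraits to download.</p>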
<p>Using some one-off scripts on an EC2 instance, I downloaded all of the images, put them in an S3 bucket and then forwarded them to Rekognition. The <a href="https://github.com/tijmenb/rijksmuseum-aws-experiment">code for the download and upload is here</a>, but it’s not super reproducible.</p>
<p>After this, I created a <a href="https://github.com/tijmenb/look-a-ryks">small application</a> where users can upload a photo and see their doppelganger from the Rijksmuseum collections.</p>
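<p>The matching step works roughly like this — a sketch under assumptions, not the app’s actual code. The portrait faces are indexed into a Rekognition face collection up front, and each uploaded photo is searched against it; the collection name and field names here are illustrative:</p>

```ruby
# Assumed SDK usage (needs AWS credentials and the aws-sdk-rekognition gem;
# "portraits" is a hypothetical collection populated earlier with IndexFaces):
#
#   client = Aws::Rekognition::Client.new
#   resp = client.search_faces_by_image(
#     collection_id: "portraits",
#     image: { bytes: File.binread("selfie.jpg") },
#     max_faces: 5
#   )
#   matches = resp.face_matches.map { |m| { similarity: m.similarity, id: m.face.external_image_id } }

# Picking the doppelganger is then plain Ruby: the match with the
# highest similarity score wins.
def best_match(matches)
  matches.max_by { |m| m[:similarity] }
end

best_match([
  { similarity: 87.2, id: "RP-P-1889-A-14501A" },
  { similarity: 93.5, id: "SK-A-0001" }
])
# => { similarity: 93.5, id: "SK-A-0001" }
```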
<p><img src="/media/2019-05-06-look-a-ryks-4.png" alt="" /></p>
<h2 id="the-result">The result</h2>
<p>For a lot of people, the application manages to find a portrait that at least has a passing resemblance. It was enough for a delivery manager in GDS to print out the doppelgangers of team members to put on the wall.</p>
<p><img src="/media/2019-05-06-look-a-ryks-5.jpeg" alt="" /></p>

<hr />

<h1>How to code review a Rails application</h1>
<p><em>2019-02-03 — <a href="https://www.tijmen.cc/2019/02/03/code-review">original post</a></em></p>

<p>Monday I spent the day at the offices of a government department. A few weeks ago I was asked to do a code review of a new application that they had built, specifically looking at security.</p>
<p>The application is built using Rails on Ruby - which currently isn’t in use at the department. I’ve never written a report on a Rails app, so I’m still figuring out how to best approach the code review. What I’ve come up with so far:</p>
<h3 id="authentication">Authentication</h3>
<p>How users sign in to the app, and which parts of the app can only be accessed by signing in. Look at the authentication library in use (a well-known library like <a href="https://github.com/plataformatec/devise">devise</a> is probably best).</p>
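<p>Concretely, the pattern I look for is a blanket sign-in requirement with explicit opt-outs — something like this sketch using devise’s standard helpers (the controller names are made up):</p>

```ruby
class ApplicationController < ActionController::Base
  # Devise helper: require a signed-in user for every action by default
  before_action :authenticate_user!
end

class HealthcheckController < ApplicationController
  # Public endpoints opt out explicitly, which keeps the exceptions easy to audit
  skip_before_action :authenticate_user!
end
```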
<h3 id="authorisation">Authorisation</h3>
<p>Whether there are different roles or permissions for users, and what actions they can perform.</p>
<h3 id="dependencies">Dependencies</h3>
<p>Which dependencies are used, are they up to date, and how trusted are they. Points for having Dependabot running.</p>
<h3 id="rails-configuration">Rails configuration</h3>
<p>Whether the application is configured with security in mind (mostly this would be following Rails’ defaults).</p>
<h3 id="advanced-rails-security">Advanced Rails security</h3>
<p>Whether the application uses Rails’ advanced security features like Content Security Policy (CSP) and protection against Cross-Site Request Forgery (CSRF).</p>
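<p>Both have first-class support in recent Rails versions. A minimal sketch (Rails 5.2+ for the CSP DSL; the policy values are examples, not recommendations):</p>

```ruby
# config/initializers/content_security_policy.rb
Rails.application.config.content_security_policy do |policy|
  policy.default_src :self
  policy.script_src  :self, :https
end

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # Reject state-changing requests that lack a valid CSRF token
  protect_from_forgery with: :exception
end
```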
<h3 id="cookies">Cookies</h3>
<p>Whether the session cookies are secure, and whether the application uses additional cookies.</p>
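<p>For a cookie-based session store, the settings to check look something like this (values illustrative; <code>same_site</code> needs a reasonably recent Rails/Rack):</p>

```ruby
# config/initializers/session_store.rb
Rails.application.config.session_store :cookie_store,
  key: "_app_session",
  secure: Rails.env.production?, # only send the cookie over HTTPS
  httponly: true,                # keep it out of reach of JavaScript
  same_site: :lax                # limit cross-site sending
```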
<h3 id="exception-reporting">Exception reporting</h3>
<p>If error reporting to something like Airbrake or Sentry is set up, and if it’s configured to sanitise data before sending it.</p>
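<p>With the sentry-ruby gem, for example, the relevant settings look like this (other services differ; treat this as illustrative, and the scrubbed field as a made-up example):</p>

```ruby
# config/initializers/sentry.rb
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]
  config.send_default_pii = false # don't attach user IPs, cookies, etc.
  # Scrub a sensitive field before the event leaves the app
  config.before_send = lambda do |event, _hint|
    event.extra&.delete(:password)
    event
  end
end
```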
<h3 id="linting">Linting</h3>
<p>Whether the code is linted on CI. A linter like Rubocop can catch bugs in development, and bugs can often lead to security vulnerabilities.</p>
<h3 id="security-tests">Security tests</h3>
<p>Whether there are any integration or unit tests relating to security, and their coverage.</p>

<hr />

<h1>Improving architectural introductions</h1>
<p><em>2019-01-31 — <a href="https://www.tijmen.cc/2019/01/31/architectural-intros">original post</a></em></p>

<p>Over the last year or so I’ve been doing “Intro to GOV.UK architecture” sessions every quarter. The audience is people who are new to GOV.UK and people who like a refresher.</p>
<p>We do this because GOV.UK’s publishing system is fairly complex — it consists of ~80 different components. Our <a href="https://docs.publishing.service.gov.uk/manual/architecture.html">reference architecture diagram</a> looks like this at the moment:</p>
<p><img src="/media/architecture-intros/1.png" alt="" /></p>
<p>The format is simple: I draw GOV.UK’s applications on a whiteboard, and I explain how content goes from a publishing app to the website, touching on things like email alerts, search, our Content Delivery Network (CDN).</p>
<p>I’ve iterated on the presentations a lot. These are some of the improvements I’ve made over time:</p>
<h2 id="1-ask-for-expectations">1. Ask for expectations</h2>
<p>When people arrive, I ask them to write down their expectations on a post-it. This is a good starter because people will always trickle in in the first 5 minutes and this allows the people who join early to start thinking about the session (and the late people to join in easily, as they haven’t missed anything yet).</p>
<p>Once everybody has written down their hopes, I take the post-its, and read them out loud while putting them up on the wall. I make sure to call out anything that I definitely will cover, and the things that I won’t. At the end of the session I’ll check in on the post-its and ask if all the questions have been answered.</p>
<figure>
<img src="/media/architecture-intros/2.jpeg" />
<figcaption>Picture of the end result in July 2018 — everything was handwritten and I tried to cover absolutely everything</figcaption>
</figure>
<h2 id="2-shorter-sessions">2. Shorter sessions</h2>
<p>The sessions for developers used to run 90 minutes and sometimes they ran even longer.</p>
<p>While I still feel this is the correct amount of time, I’ve concluded that running anything over an hour is essentially useless because people won’t be able to recall much more afterwards, and risk getting super confused.</p>
<h2 id="3-neat-drawing">3. Neat drawing</h2>
<p>Instead of hand-writing all of the components on the wall, I now have a screenshot of GOV.UK to point to, and colour-coded cards for each component in the system. This makes the board much easier to read (my handwriting is terrible) and allows me to re-lay out the diagram if necessary.</p>
<figure>
<img src="/media/architecture-intros/3.jpeg" />
<figcaption>Some improvements in August 2018 — I started experimenting with adding coloured notes</figcaption>
</figure>
<h2 id="4-explain-fewer-things">4. Explain fewer things</h2>
<p>I also cover a lot fewer things in the hour. I used to strive for completeness, letting people see the entire ecosystem of GOV.UK, with all the edge cases, weird stuff and lesser components. Instead I now leave out edge cases, and focus more on the core applications.</p>
<h2 id="5-ask-for-feedback">5. Ask for feedback</h2>
<p>Immediately after the session I send out a Google Form with 3 questions:</p>
<ul>
<li>What did you like about the session? (optional)</li>
<li>What would make it even better? (optional)</li>
<li>Your name (optional)</li>
</ul>
<p>The feedback has been invaluable in understanding which parts to focus on and what to improve.</p>
<figure>
<img src="/media/architecture-intros/4.jpeg" />
<figcaption>The latest iteration in January 2019 — neat printouts for most of the components</figcaption>
</figure>

<hr />

<h1>How robots help keep GOV.UK up to date</h1>
<p><em>2018-10-02 — <a href="https://www.tijmen.cc/2018/10/02/how-robots-keep-govuk-up-to-date">original post</a></em></p>

<p>The first commit to GOV.UK is almost 8 years old and we currently run around 60 applications. Over the last 6 months, we’ve used an automated tool to keep our applications up to date.</p>
<p>In 2017, we wrote a <a href="https://github.com/alphagov/govuk-dependency-analysis">little tool to analyse and visualise dependencies</a> across our applications. One of the things we did was generate a treemap to show all dependencies and their versions. This showed we were using 487 different dependencies, in 1179 different versions.</p>
<p><img src="/media/2018-10-02-1.png" alt="" /></p>
<p>A lot of these dependencies were running in different versions. For example, the diagram above shows that we were using 13 different versions of Ruby on Rails, and 9 different versions of Unicorn (our web server).</p>
<p>Updating dependencies becomes more difficult over time. An application with many outdated dependencies makes updating complicated, because a lot of code will depend on specific dependency versions, which may themselves depend on other outdated dependencies. It’s like moving house by carrying out the bookcases with the books still in them: possible, but really tiring and slow.</p>
<p>Our team needed to find a way to consolidate our dependencies so we could:</p>
<ul>
<li>follow the <a href="https://docs.publishing.service.gov.uk/manual/keeping-software-current.html">GOV.UK policy</a> of keeping our dependencies up to date</li>
<li>react quickly to security updates</li>
<li>save developers time and frustration by letting them use the same version of dependencies</li>
</ul>
<h2 id="choosing-a-tool-to-help-us-update-dependencies">Choosing a tool to help us update dependencies</h2>
<p>We decided on a dual approach to fix our problem. Our first priority was to get all our applications on the latest version of Ruby on Rails. We also wanted to create a system that would save us from taking a big-bang approach.</p>
<p>We decided to investigate a number of tools that do automatic dependency updating. These tools all work in a similar way by keeping track of the dependencies of an application. When a new version of the dependency appears, a Pull Request is raised against the repository. Tools will often add changelog information in the Pull Request.</p>
<p><img src="/media/2018-10-02-3.png" alt="" /></p>
<p>After evaluating Snyk, Depfu and Dependabot, we decided on Dependabot, because it’s <a href="https://github.com/dependabot/dependabot-core">partially open source</a>.</p>
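<p>At the time Dependabot ran as a hosted GitHub app; it has since been built into GitHub itself, configured per repository with a file along these lines (a minimal sketch of the current format):</p>

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "bundler"
    directory: "/"
    schedule:
      interval: "weekly"
```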
<p>The first stage was a lot of hard work. We estimated Dependabot made around 2,000 Pull Requests to get everything up to the latest version.</p>
<h2 id="benefits-were-seeing">Benefits we’re seeing</h2>
<h3 id="1-everything-is-more-up-to-date">1. Everything is more up to date</h3>
<p>In total, we’ve decreased the number of different dependency versions we’re running by a quarter, even though we’ve increased the actual number of applications:</p>
<p>In September 2017 our applications had a combined total of 1179 different gem versions. In September 2018 this had decreased to 871 different versions, a 26% decrease.</p>
<p><img src="/media/2018-10-02-2.png" alt="" /></p>
<h3 id="2-weve-increased-architectural-flexibility">2. We’ve increased architectural flexibility</h3>
<p>When choosing whether to expose functionality in a microservice or in a shared dependency, we now more often choose to put the functionality in a Ruby gem, which allows for easier testing and development. If developers had to manually update all the applications for each version bump, this would be really expensive.</p>
<h3 id="3-upgrade-cycles-are-easier">3. Upgrade cycles are easier</h3>
<p>Because we now get notified when a new version of a dependency arrives, it’s much easier for a single developer to review the changelog, make any changes to the applications, and apply the upgrade. Previously, an upgrade might be spread out across time, and be done by multiple people.</p>

<hr />

<h1>My LRUG talk about documentation</h1>
<p><em>2018-03-09 — <a href="https://www.tijmen.cc/2018/03/09/lrug-talk">original post</a></em></p>

<p>I’ve previously written about <a href="/2017/07/05/keeping-docs-current.html">how we keep developer docs up to date</a> on GOV.UK, and how we use <a href="/2017/06/11/docs-review-system.html">a review system to make it easier</a>.</p>
<p>Last November I gave a talk at the <a href="http://lrug.org/">London Ruby User Group (LRUG)</a> covering the same material.</p>
<p>You can <a href="https://skillsmatter.com/skillscasts/11153-5-ways-to-keep-docs-up-to-date">watch it on the Skillsmatter website</a>. I’ve put the <a href="https://speakerdeck.com/tijmenb/gov-dot-uk-developer-docs">slides up on Speaker Deck</a>:</p>
<script async="" class="speakerdeck-embed" data-id="cf92ba2e725c4900a1dbb2aa141a3555" data-ratio="1.77777777777778" src="//speakerdeck.com/assets/embed.js"></script>Tijmen BrommetUsing Fastly for A/B testing2017-10-16T00:00:00+00:002017-10-16T00:00:00+00:00https://www.tijmen.cc/2017/10/16/ab-testing-fastly<p>Since last year, we’ve been running A/B tests on GOV.UK using Fastly, our CDN.</p>
<h2 id="ab-testing">A/B testing</h2>
<p>Most A/B testing takes place using some kind of server-side implementation.</p>
<p>When the user requests the page, the server places the user in the <code class="language-plaintext highlighter-rouge">A</code> or <code class="language-plaintext highlighter-rouge">B</code> bucket and serves them the appropriate version. To make sure the user sees the same version on their next visit, a cookie is set.</p>
<p>Examples of this method for Rails are <a href="https://github.com/splitrb/split">Split</a> and <a href="https://github.com/assaf/vanity">Vanity</a>.</p>
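<p>The approach can be sketched in a few lines of plain Ruby. This is a minimal illustration of the idea, not the actual API of Split or Vanity; the <code class="language-plaintext highlighter-rouge">cookies</code> hash stands in for a Rails-style cookie jar.</p>

```ruby
# Minimal sketch of server-side bucketing. The `cookies` hash stands in
# for a Rails-style cookie jar; Split and Vanity have their own, richer APIs.
def assign_variant(cookies)
  cookies["ab_test"] ||= rand < 0.5 ? "A" : "B"
end

cookies = {}                       # first visit: no cookie yet
variant = assign_variant(cookies)  # picks "A" or "B" and stores it
assign_variant(cookies)            # later requests reuse the stored value
```

<p>Because the assignment is stored on first call, every later request from the same user lands in the same bucket.</p>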
<h2 id="ab-with-a-cdn">A/B with a CDN</h2>
<p>On GOV.UK this system won’t work. To make the site fast and always available to users, we’re using <a href="https://www.fastly.com/">Fastly</a>, a Content Delivery Network (CDN). This means that most requests to www.gov.uk are served from a cache rather than our own servers.</p>
<p>Having most pages cached makes the site super fast, but it makes a straightforward server-side implementation impossible. Without any extra configuration, you would end up caching either the A or the B version, and serving that for however long your cache TTL is.</p>
<p>However, it’s possible to move the A/B testing to the CDN level. This is how that works:</p>
<ul>
<li>Fastly determines if the user sees the A or the B variant</li>
<li>Instead of relying on a cookie, Fastly sends the chosen variant to origin in an HTTP header</li>
<li>The application reads this header and responds with the chosen variant, as well as a <code class="language-plaintext highlighter-rouge">Vary</code> HTTP header</li>
<li>Because of the <code class="language-plaintext highlighter-rouge">Vary</code> header, Fastly uses a separate cache for each variant</li>
<li>Finally, Fastly sets a cookie so that it can show the same version next time</li>
</ul>
<h2 id="configuring-fastly">Configuring Fastly</h2>
<p>To get what we wanted, we had to build the A/B testing code to work with the CDN. This has 3 parts: configuring cookies, determining the correct page to show, and getting the correct page cached by the CDN.</p>
<p>Our CDN is configured using the Varnish Configuration Language (VCL). The following is the VCL to configure A/B tests, simplified a bit.</p>
<div class="language-pl highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">sub </span><span class="nf">vcl_recv</span> <span class="p">{</span>
<span class="k">if</span> <span class="p">(</span><span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">Cookie</span> <span class="o">~</span> <span class="p">"</span><span class="s2">ABTest-Example</span><span class="p">")</span> <span class="p">{</span>
<span class="nv">set</span> <span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">MyABTest</span> <span class="o">=</span> <span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">Cookie:ABTest</span><span class="o">-</span><span class="nv">Example</span><span class="p">;</span>
<span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
<span class="k">if</span> <span class="p">(</span><span class="nv">randombool</span><span class="p">(</span><span class="mi">5</span><span class="p">,</span><span class="mi">10</span><span class="p">))</span> <span class="p">{</span>
<span class="nv">set</span> <span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">MyABTest</span> <span class="o">=</span> <span class="p">"</span><span class="s2">B</span><span class="p">";</span>
<span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
<span class="nv">set</span> <span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">MyABTest</span> <span class="o">=</span> <span class="p">"</span><span class="s2">A</span><span class="p">";</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="k">sub </span><span class="nf">vcl_deliver</span> <span class="p">{</span>
<span class="nv">add</span> <span class="nv">resp</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">Set</span><span class="o">-</span><span class="nv">Cookie</span> <span class="o">=</span> <span class="p">"</span><span class="s2">ABTest-Example=</span><span class="p">"</span> <span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">MyABTest</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Let’s look at it in detail:</p>
<h3 id="use-the-cookie-if-set">Use the cookie if set</h3>
<p>First, we check if the user already has a cookie. If so, use the value of the
cookie in the HTTP header.</p>
<div class="language-pl highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span> <span class="p">(</span><span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">Cookie</span> <span class="o">~</span> <span class="p">"</span><span class="s2">ABTest-Example</span><span class="p">")</span> <span class="p">{</span>
<span class="nv">set</span> <span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">MyABTest</span> <span class="o">=</span> <span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">Cookie:ABTest</span><span class="o">-</span><span class="nv">Example</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>
<h3 id="pick-a-random-bucket-otherwise">Pick a random bucket otherwise</h3>
<p>If no cookie is set, we randomly assign the user to a bucket:</p>
<div class="language-pl highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span> <span class="p">(</span><span class="nv">randombool</span><span class="p">(</span><span class="mi">5</span><span class="p">,</span><span class="mi">10</span><span class="p">))</span> <span class="p">{</span>
<span class="nv">set</span> <span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">MyABTest</span> <span class="o">=</span> <span class="p">"</span><span class="s2">B</span><span class="p">";</span>
<span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
<span class="nv">set</span> <span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">MyABTest</span> <span class="o">=</span> <span class="p">"</span><span class="s2">A</span><span class="p">";</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">randombool(5,10)</code> function in VCL returns true 50% of the time (5 out of every 10 requests, on average).</p>
<p>Varnish then sends an HTTP header called <code class="language-plaintext highlighter-rouge">MyABTest</code> with the chosen variant to your server, so the application can switch templates.</p>
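<p>To make the probability behaviour concrete, here’s a hypothetical Ruby stand-in for <code class="language-plaintext highlighter-rouge">randombool(numerator, denominator)</code>, which is true with probability numerator/denominator:</p>

```ruby
# Hypothetical Ruby equivalent of VCL's randombool(numerator, denominator):
# true with probability numerator/denominator.
def randombool(numerator, denominator)
  rand(denominator) < numerator
end

trials = 10_000
b_count = trials.times.count { randombool(5, 10) }
# b_count will be close to 5_000: roughly half of users land in bucket B
```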
<h3 id="serve-a-variant">Serve a variant</h3>
<p>In your application you can now change the variant on the basis of the HTTP header:</p>
<div class="language-rb highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span> <span class="n">request</span><span class="p">.</span><span class="nf">headers</span><span class="p">[</span><span class="s2">"MyABTest"</span><span class="p">]</span> <span class="o">==</span> <span class="s2">"B"</span>
<span class="n">render</span> <span class="s2">"b_template"</span>
<span class="k">else</span>
<span class="n">render</span> <span class="s2">"default_template"</span>
<span class="k">end</span>
</code></pre></div></div>
<h3 id="caching-behaviour">Caching behaviour</h3>
<p>But at this point we would still have a problem: Varnish would cache the page once, regardless of whether the A or the B variant was rendered.</p>
<p>To counter this, we use the HTTP <code class="language-plaintext highlighter-rouge">Vary</code> header. <code class="language-plaintext highlighter-rouge">Vary</code> tells a cache to store a separate copy of the response for each value of the named request header. For example, setting <code class="language-plaintext highlighter-rouge">Vary: User-Agent</code> makes the cache save a copy per user agent, so a different version is stored depending on the user’s browser.</p>
<p>We can use this to create a cache for each bucket:</p>
<div class="language-rb highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># application code</span>
<span class="n">response_headers</span><span class="p">[</span><span class="s2">"Vary"</span><span class="p">]</span> <span class="o">=</span> <span class="s2">"MyABTest"</span>
</code></pre></div></div>
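<p>The effect of <code class="language-plaintext highlighter-rouge">Vary</code> can be illustrated with a toy in-process cache (plain Ruby, not real CDN behaviour): the cache key includes the value of every request header named in the vary list, so each variant gets its own entry.</p>

```ruby
# Toy simulation of how Vary splits the cache: the cache key includes the
# value of every request header named in the vary list, so each variant
# of the same path gets its own cached copy.
class ToyCache
  def initialize
    @store = {}
  end

  def fetch(path, request_headers, vary: [])
    key = [path, *vary.map { |h| request_headers[h] }]
    @store[key] ||= yield
  end
end

cache = ToyCache.new
render = ->(headers) { headers["MyABTest"] == "B" ? "b page" : "a page" }

a = cache.fetch("/example", { "MyABTest" => "A" }, vary: ["MyABTest"]) { render.call("MyABTest" => "A") }
b = cache.fetch("/example", { "MyABTest" => "B" }, vary: ["MyABTest"]) { render.call("MyABTest" => "B") }
# a and b come from two separate cache entries for the same path
```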
<h3 id="cookies">Cookies</h3>
<p>Finally, we need to make sure we set a cookie for the user. In VCL:</p>
<div class="language-pl highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">add</span> <span class="nv">resp</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">Set</span><span class="o">-</span><span class="nv">Cookie</span> <span class="o">=</span> <span class="p">"</span><span class="s2">ABTest-Example=</span><span class="p">"</span> <span class="nv">req</span><span class="o">.</span><span class="nv">http</span><span class="o">.</span><span class="nv">MyABTest</span><span class="p">;</span>
</code></pre></div></div>
<p>This sets a cookie with the variant the user has seen, so that we can show them the same version next time.</p>
<h2 id="more-reading">More reading</h2>
<ul>
<li><a href="https://docs.publishing.service.gov.uk/manual/ab-testing.html">GOV.UK documentation on A/B tests</a></li>
<li><a href="https://www.fastly.com/blog/ab-testing-edge">A/B testing at the edge</a></li>
</ul>Tijmen BrommetHow we used data science to accelerate our taxonomy creation2017-09-17T00:00:00+00:002017-09-17T00:00:00+00:00https://www.tijmen.cc/2017/09/17/data-science<p>(Written by <a href="https://twitter.com/whoojemaflip">Tom Gladhill</a> & me)</p>
<p>We’re working to improve the way GOV.UK goes about <a href="https://insidegovuk.blog.gov.uk/2016/04/14/building-a-new-tagging-infrastructure-for-gov-uk/">building a navigation taxonomy</a>. The taxonomy will cover every area of government content, from education to transport. When finished it will <a href="https://insidegovuk.blog.gov.uk/2015/10/27/improving-navigation-on-gov-uk/">help our users to navigate</a> the large volume of content on the site.</p>
<p>There are now over 300,000 pages of content on www.gov.uk - uploaded by over 1,000 government departments and agencies. One of the challenges we face as a team is to understand what this content is really about. Initially, we thought that by reading each page and jotting some notes down we’d form a good overall understanding of the nature of the content. But when we sat down with a list of all of the content relating to even a narrow theme, we quickly realised there was no way our small team of developers and content designers could get through every item. We were working towards developing a taxonomy for the environmental content on GOV.UK, and this theme alone contained months of reading.</p>
<p>So we put our heads together and decided that, as per the <a href="https://en.wikipedia.org/wiki/Pareto_principle">Pareto principle</a>, we should find around 80% of the theme’s concepts in just 20% of the pages. That’d be enough to get us started on building the branch of the taxonomy for environment, and we should naturally discover the rest of the concepts later on. We thought that if we tackled the most viewed pages first, we would have the highest chance of discovering all of the most important concepts. We were still looking at a list of almost 6,000 pages at this point, but it was a definite improvement. Even a small reduction in the number of pages to review would greatly improve our team’s velocity of creating this taxonomy.</p>
<p>We set out to further reduce this number. Based on the <a href="https://insidegovuk.blog.gov.uk/2017/03/21/presenting-our-new-taxonomy-beta">previously mapped and tagged Education theme</a>, we looked at the rate of new concept discovery. Below is a plot of the total number of new concepts found against the number of pages reviewed for a single branch of our taxonomy. You can see the line flatten towards the upper right of the graph, as the rate of new concept discovery falls off. Would it be possible to predict when we could stop reviewing pages, based on this trend?</p>
<p><img src="/media/1-concept-discovery.png" alt="" /></p>
<h2 id="patterns-in-the-data">Patterns in the data</h2>
<p>What we found was that while it’s hard to predict when enough is enough from this data, we could see a pattern. The content list was originally ordered by the number of pageviews on the site: an ordering that actually compared poorly to random. But, we asked, what if we could find the order that allowed us to generate a sufficient number of concepts by reviewing the least number of pages?</p>
<p><img src="/media/2-patterns.png" alt="" /></p>
<h2 id="inverse-similarity-selection">‘Inverse similarity’ selection</h2>
<p>That led us to this question: can we use <a href="https://en.wikipedia.org/wiki/Machine_learning">machine learning</a> to help pick the most different pages within a theme? If the answer was yes then the concept finding exercise should take less time, as there’d be fewer pages on the same topic to review.</p>
<p>As it happens, last year our team did a couple of experiments with natural language processing. Helped by the data team at GDS, we investigated the <a href="https://gdsdata.blog.gov.uk/2017/01/12/using-data-science-to-build-a-taxonomy-for-gov-uk/">usefulness of using machine learning to generate taxonomies</a>. This technology now provided us with a new way to tackle the problem in front of us. We had learned what we needed from these past experiments, and successfully applied this knowledge to this newly understood problem.</p>
<p>We trialled this approach against the content tagged to the education branch of the taxonomy, and found <a href="https://github.com/alphagov/govuk-inverse-similarity">our new solution</a> performed exactly as we’d hoped. We think the number of pages that need to be read has fallen from 20% to less than 5%, and we have a repeatable process that can be applied to future content themes.</p>
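<p>To illustrate the idea, here is a sketch of a greedy ‘inverse similarity’ ordering. This is an illustration only, not the actual <a href="https://github.com/alphagov/govuk-inverse-similarity">govuk-inverse-similarity</a> implementation, and the page titles and vectors are made up: given a vector per page (for example, term frequencies), repeatedly pick the page least similar to the pages already selected, so reviewers see maximally different content first.</p>

```ruby
# Illustrative greedy 'inverse similarity' ordering, not the actual
# alphagov/govuk-inverse-similarity implementation. Each page has a vector
# (e.g. term frequencies); we repeatedly select the page least similar to
# the pages already chosen.
def cosine(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  mag = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  denominator = mag.call(a) * mag.call(b)
  denominator.zero? ? 0.0 : dot / denominator
end

def inverse_similarity_order(pages)
  remaining = pages.dup
  ordered = [remaining.shift] # seed with the first page
  until remaining.empty?
    # take the page whose closest match among selected pages is furthest away
    pick = remaining.min_by { |p| ordered.map { |o| cosine(p[:vector], o[:vector]) }.max }
    ordered << remaining.delete(pick)
  end
  ordered
end

# Hypothetical example pages: two near-duplicates about flooding and one
# unrelated page about air quality.
pages = [
  { title: "flood defences",  vector: [1.0, 0.0, 0.0] },
  { title: "flood insurance", vector: [0.9, 0.1, 0.0] },
  { title: "air quality",     vector: [0.0, 1.0, 0.0] },
]
inverse_similarity_order(pages).map { |p| p[:title] }
# => ["flood defences", "air quality", "flood insurance"]
```

<p>The near-duplicate flooding page drops to the end of the queue, which is exactly the property that lets reviewers stop early without missing concepts.</p>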
<p><img src="/media/3-inverse-similarity.png" alt="" /></p>
<h2 id="trying-it-out-in-anger">Trying it out in anger</h2>
<p>The pages are currently being used to generate the terms for our transport theme. This is turning out to be really useful. We may integrate this algorithm into any tools we build to support the taxonomy generation process.</p>Tijmen BrommetFind a new flat with Zoopla & Trello2017-07-20T00:00:00+00:002017-07-20T00:00:00+00:00https://www.tijmen.cc/2017/07/20/houseparty<p>Yesterday, I gave a lightning talk at <a href="https://codebar.io/">Codebar</a>. I spoke about the tool I’d built to look for a new flat, using Zoopla and Trello.</p>
<p>You can find <a href="https://github.com/tijmenb/codebar-mini-houseparty/blob/master/slides.pdf">the slides</a>, the <a href="https://github.com/tijmenb/codebar-mini-houseparty">code for the presentation</a> and the <a href="https://github.com/tijmenb/houseparty">original code</a> on GitHub.</p>Tijmen Brommet