Deploy then merge, each day

I favour master-branch development, but as git and GitHub have become more and more successful, everyone favours PR-driven work. Don’t get me wrong. I love git and GitHub, and can’t imagine a work environment without them.

So my compromise is to deploy my PR to the integration environment at least once a day and merge the change to the master branch. It’s a compromised version of continuous integration, or “throttled continuous integration”.

What are the benefits? I think there are many, but here are some I can think of now:

  • With a smaller PR, you can have a more thorough code review. Any discussion of the code starts earlier, and you have more time to think about it and improve it. The merge or rebase will be trivial since the scope is smaller, so you won’t have as many conflicts as before.
  • By pushing your code into the master branch more often, everyone gets each other’s code as early as possible. So when you deploy your branch to a testing environment, it won’t wipe out a colleague’s code that they might be testing. No competition for ad-hoc deployments is necessary.
  • Basically, all the benefits of continuous integration, including the above.

What do you think?

Validate tenant with middleware / global filter, ASP.NET Core 2

Tenant is a term that describes the country the application runs in at Just Eat. For example, the UK and IE (Ireland) are two different tenants. I’ve created an API that accepts a scheduled event. The endpoint looks like this:

POST api/v1/uk/events

At the moment, we’ll only support 7 different tenants. The API should return BadRequest if the country code doesn’t match any of those 7. This is a good fit for ASP.NET Core middleware, since middleware gets executed on every request.

What’s middleware? Middleware is software that’s assembled into an application pipeline to handle requests and responses.

For implementation, I need to do

  1. Create an extension method, app.UseValidateTenantOnRequest(), to hook up the middleware
  2. Write the middleware
  3. Use the middleware in Startup class

1. Extension method

public static class ValidateTenantOnRequestExtensions
{
    public static IApplicationBuilder UseValidateTenantOnRequest(
      this IApplicationBuilder builder)
    {
        return builder.UseMiddleware<ValidateTenantOnRequestMiddleware>();
    }
}

2. middleware

public class ValidateTenantOnRequestMiddleware
{
    private readonly RequestDelegate _next;

    public ValidateTenantOnRequestMiddleware(
      RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        if (context.GetRouteData().Values.ContainsKey("tenant"))
        {
            var tenantStr = context.GetRouteValue("tenant")
              as string;
            Tenant _;
            var success = Enum.TryParse(tenantStr, true, out _);
            if (!success)
            {
                context.Response.StatusCode =
                  (int) HttpStatusCode.BadRequest;
                await context.Response.WriteAsync(
                  new Error(new UnsupportedTenantException(tenantStr)).Message);
                // stop the pipeline once the error response is written
                return;
            }
        }

        await _next(context);
    }
}

However, it turned out that a global filter has better support in terms of routing. app.UseMvc() populates the route data, and that happens almost last in the pipeline, so GetRouteData() at the middleware level returns null. OK, I learn as I go. Let’s change the code to a filter.


public class TenantRequiredFilterAttribute : Attribute, IResourceFilter
{
    public void OnResourceExecuting(ResourceExecutingContext context)
    {
        if (context.RouteData.Values.ContainsKey("tenant"))
        {
            var tenantStr = context.RouteData.Values["tenant"]
              as string;
            Tenant _;
            var success = Enum.TryParse(tenantStr, true, out _);
            if (!success)
            {
                context.Result = new ObjectResult(
                  new Error(new UnsupportedTenantException(tenantStr)))
                {
                    StatusCode = 400
                };
            }
        }
    }

    public void OnResourceExecuted(ResourceExecutedContext context)
    {
    }
}
The code is almost the same, except that now the class implements IResourceFilter.
You can add the filter to the pipeline in Startup.

services.AddMvc(options => { 
    options.Filters.Add(new TenantRequiredFilterAttribute()); });

Forward your email with Amazon SES

My partner works as an acupuncturist. As a self-employed person, she’s got a website, and recently the hosting company was bought out, leaving her website down. I’ve moved her website to Amazon S3. Everything worked well except her contact email address.

SES is not supported in the London region, so I chose Ireland, eu-west-1.

Create a rule.

  1. Add a recipient.
  2. In Actions, choose S3 and select the bucket where you want to store the email.
    1. I’ve created a directory inside the bucket, “emails”. Put “emails” as the Object key prefix.
  3. On the next page, I put “info-rule” as the rule name, then clicked “Create Rule”.

I got an error, “Could not write to bucket: xxxx-xxxx-email”. It was because I hadn’t set a policy on the S3 bucket to allow SES access. I followed a very helpful instruction; you need to know your account Id.
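For reference, the bucket policy I needed looks roughly like the one below. The account ID (111122223333) is a placeholder; the bucket name is the one from the error message. This follows the shape the SES email-receiving documentation gives, where your account ID goes in the aws:Referer condition.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSESPuts",
      "Effect": "Allow",
      "Principal": { "Service": "ses.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::xxxx-xxxx-email/*",
      "Condition": {
        "StringEquals": { "aws:Referer": "111122223333" }
      }
    }
  ]
}
```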

Now, let’s send an email to test if the mail goes into the S3 bucket.

5 minutes later, the email still hadn’t appeared. What did I do wrong? I read the SES instructions again.

Interestingly, 10 minutes later, I’ve got an email from Amazon, saying “Please note that the rule that you configured to deliver emails to this S3 bucket is only valid if the entire setup process is successful. …” Hmm, what does it mean by “entire setup process”?

  1. Verify your domain: O
  2. Configure in Route 53: O
  3. Set S3 bucket name: O
  4. Set Object Key Prefix: X.

Ah, I’d missed setting the folder name as the Object Key Prefix. I was setting it up and writing about it at the same time, and my colleagues were chatting loudly behind me, so I must have been distracted.

After a few more failed retries, I finally set it up successfully. The cause of the failure was that I hadn’t set the name on the Route 53 record set correctly. The name should be the same as the domain. In my case, the name should have been “”; before, it was “”.

Lambda …

I’ve created a lambda function but failed to set it on the SES rule set. It seemed SES didn’t have enough permission to access the function.

After a while… it turned out that I had SES in eu-west-1 and my lambda in eu-west-2, so the SES rule set wasn’t able to show the lambda function. I created a function in eu-west-1 instead, which let me select it on the rule set, and the AWS console kindly added the right permission to the lambda.

It all works now. To summarise the steps I went through:

  1. Create a lambda with ses-forwarder.
  2. Verify the domain on SES. Create an MX record in Route 53.
  3. Verify your email address on SES.
  4. Create a rule set so that SES receives emails to your domain. The rule set will have two actions: 1) S3 and 2) Lambda.
  5. Test.
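For step 2, the MX record should have the domain itself as its name, and for eu-west-1 its value is the regional SES inbound endpoint (the host name below comes from the SES documentation):

```
10 inbound-smtp.eu-west-1.amazonaws.com
```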



Test Driven Reading

“Read your programming book as normal. When you get to a code sample, read it over

Then close the book.

Then try to type it up.”


I’d like to call it TDR – Test Driven Reading

Personally, it hasn’t been easy for me to learn a new language or framework by reading a book. I’d start off excited but rarely made it to the end, and even after finishing, when I actually tried to build something, I couldn’t remember much of the content. And because reading went slower than expected, the slow pace of learning was frustrating too.

Video courses were a bit better. Most of them cover a topic from start to finish in under three hours, so learning felt faster, and it was great to see how an expert in the field approaches a problem, how they code, which tools they use, and which mental models they rely on.

But video courses have the same problem: while following along you feel like you know it, yet when you actually try it yourself, you realise you don’t…

The method suggested in the quote above is classic fill-in-the-blanks:
leave hints here and there and fill in the blanks yourself – coding, in this case – so that you learn by discovering what you don’t know.

Learning seems to start with knowing what you don’t know.

C# in functional flavour

Functional programming can make your head explode. This book stitches it back together.

Daniel Marbach, Particular Software

C# is often the default language for .NET developers, and it has embraced more and more functional elements over time. I know F# should be the choice for serious functional programming, but C# can be a good option when your projects are already written in C#.
C# is more functional than ever with the release of C# 7. This is another really good book I came across on Safari Books Online. It is about functional programming proper, not just a few tips and tricks to imitate F# or Haskell.
F# is the future on .NET, but you are often paid to write C#. By writing C# in a functional way, you will stay relevant and get paid. You can have your cake and eat it!
In that context, Functional Programming in C# from Manning Publications is a very good book for any C# dev who wants to start functional programming without changing jobs. With its many practical examples, the book is written for proficient C# programmers with no prior FP experience. It will give you an awesome new perspective.

All the code examples are on GitHub.

The author is very good at explaining the definitions of functional concepts and jargon. I especially liked his way of introducing functor and monad with map and bind.

Enrico Buonanno’s online presence:



Command-Line app with Node.js

JavaScript has grown massively with the introduction of Node.js; it serves as a language for the front end, server-side APIs, desktop apps, and even mobile apps. It’s a kind of universal language every platform understands.

So it’s no wonder you can build command-line tools with Node.js. Recently, at Just Eat, I had to upload a custom ZenDesk app via the ZenDesk API and wanted to build a script that would do the job. My choice was Node.js with commander.js.

Packages you need

With commander.js, you can declare command-line arguments in a very clear way.

const program = require('commander');
const request = require('request');
const fs = require('fs');

console.log('ex) node zen i <subdomain> <token>\n');

program
  .description('ZenDial Installer');

program
  .command('install <subdomain> <token>')
  .description('to install ZenDial')
  .action((subdomain, token) => {
    install({subdomain, token});
  });

program.parse(process.argv);
One tricky bit was adding basic authentication together with formData. I spent 20-30 minutes figuring it out. It turned out that I had to specify the endpoint URL inside the options object.

const install = (params) => {

  const user = '';
  const password = params.token;
  const fileStream = fs.createReadStream('');

  console.log('uploading the file...');

  // NOTE: the HTTP calls were garbled in the original post;
  //{...}, callback) from the request package is assumed here.{
    url: 'https://' + params.subdomain + '',
    formData: { uploaded_data: fileStream },
    auth: { 'username': user, 'password': password }
  }, function(err, res, body) {
    if (err) {
      console.error('upload failed:', err);
      return;
    }

    console.log('file uploaded');

    const uploadId = JSON.parse(body).id;
    console.log('upload id: ' + uploadId);{
      url: 'https://' + params.subdomain + '',
      form: {
        name: 'Zendial v2',
        short_description: 'Zendesk-to-liveops integration app',
        upload_id: uploadId
      },
      json: true,
      auth: { 'username': user, 'password': password }
    }, function (err, res, body) {
      if (err) console.log(err);
    });
  });
};