Running .Net Core on AWS Lambda

AWS Lambda offers a convenient way of hosting your functions on AWS with built-in elasticity and fault tolerance. Many are looking to see their favorite runtime natively supported on the platform. This post shows how you can do that today, using .Net Core as an example.

Preparing a Workbench

In many cases, your app may require OS-specific preparation, such as compilation. To ensure compatibility with Lambda’s underlying execution environment, we will prepare a workbench based on a specific public Linux AMI version. This AMI is the same machine image used by AWS Lambda.

For our workbench, you can use a smaller instance type, assuming the code base is small and the compiler does not require a lot of resources.

After the instance is launched, log in to the instance via SSH:

ssh -i your.pem ec2-user@yourec2ipaddress

Next, we’d like to install .Net Core on our instance. The AMI we’re using comes with an older version of libpng than the version required by .Net Core. Furthermore, the included libpng is required by OpenJDK, which is also included in the AMI. For our purposes, we will remove both packages and replace libpng with the version that .Net Core expects.

Last tested 10/13/2016 against .Net Core 1.0.1

With this out of the way, we can start with the installation of .Net Core.

Preparing your application

At this point, we have a working workbench for preparing our application. We may choose to install git to pull our code into the workbench. It’s now time for the preparation itself; in our case this means compiling our source code.

Our Lambda environment won’t have the dependencies we just installed on our workbench, so we will need to package them together along with our compiled app.

App Launcher

Our app is going to wake up when the Lambda function is invoked and then quickly terminate. Currently, AWS Lambda only supports Python, Node.JS, or Java handlers, so we will need to execute our .Net Core app from within a supported handler. In this case we will be using a Node.JS-based handler:

We can then package our Lambda manually or with your favorite deployment tool.

Handling Params and Return Values

We kept the launcher above relatively simple to show the basic principles of the approach. In practice, you may want to extend the launcher to pass params from the event object to the .Net Core app as command-line arguments. The same goes for return values: you may want to parse the app’s stdout, which could be a JSON object, into something the Lambda client expects.
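For the return-value side, a small helper along these lines could turn the app’s stdout into a structured result. This is only a sketch; the `raw` fallback key is an assumption, not part of any Lambda contract.

```javascript
// A sketch of turning the .Net Core app's stdout into the object the
// Lambda caller receives. The "raw" fallback key is an assumption.
function parseAppOutput(stdout) {
  try {
    // Happy path: the app printed a JSON document.
    return JSON.parse(stdout);
  } catch (e) {
    // Fall back to the raw text when the app prints plain output.
    return { raw: stdout.trim() };
  }
}
```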


In this post, we’ve covered how to use AWS Lambda to execute apps built on runtimes other than Python, Node.JS, and Java; specifically, we used .Net Core as an example. This approach opens up new possibilities for developers to embed their favorite runtime.

Getting Intellisense and syntax highlighting to work on React JSX on VSCode


At the time of writing, Visual Studio Code (0.10.6) still does not have perfect support for JSX/React. You can either have IntelliSense for your JavaScript or syntax highlighting for JSX, but not both.

The awesome vscode community found a clever workaround.

Here’s how it works.

Step 1: Turn off syntax validation on VS Code

We want to make use of VS Code IntelliSense, so we’re going to turn off the JavaScript syntax validation so that it won’t complain when we have JSX. The good thing is that syntax highlighting still works without JavaScript validation.


{ "javascript.validate.enable": false }

Step 2: Use ESLint for error checking

At this point we need something to replace VS Code’s ability to tell us when we produce an error in our code. ESLint can provide that.

Let’s install ESLint:
npm install eslint --save-dev
npm install eslint-config-airbnb --save-dev

Install the ESLint extensions to handle ES6 and React:
npm install babel-eslint --save-dev
npm install eslint-plugin-react --save-dev

Remember, if you have a global ESLint, the plugins have to be available globally too.

.eslintrc.json config
{ "parser": "babel-eslint", "extends": "airbnb" }

IntelliSense is now working, while JSX is no longer marked as an error.
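If the minimal config above isn’t enough for your project, a slightly fuller .eslintrc.json might look like this. The plugins and env entries are assumptions about a typical browser/Node React setup; adjust them to yours.

```json
{
  "parser": "babel-eslint",
  "extends": "airbnb",
  "plugins": ["react"],
  "env": {
    "browser": true,
    "node": true
  }
}
```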

Configuring SASS transpiler on the IntelliJ IDEA 14 CE

Configuring the SASS transpiler on IntelliJ looks easy. But I couldn’t quite get it to work following those steps. Here are the steps that actually work (tested on IDEA 14.1.5 on Mac OS X El Capitan):

  1. Install sass: sudo gem install sass
  2. Go to IntelliJ IDEA | Preferences... | Plugins.
  3. Click ‘Install JetBrains plugins…’.
  4. Search for File Watchers and click “Install plugin”.
  5. Restart IDEA.

Add the SCSS file type

  1. Go to IntelliJ IDEA | Preferences... | Editor | File Types. Find CSS Files and add *.scss to the registered patterns.

Add a file watcher

  1. Go to IntelliJ IDEA | Preferences... | Tools | File Watchers.
  2. Create a new file watcher and fill in the settings. Note that this is a project-specific file watcher.

Alternatively, you can just run a file watcher from the terminal:
sass --watch app/sass:public/stylesheets

Play Framework application layout with custom Java package name

It’s not really obvious how you could use a custom Java package namespace in Play Framework 2.4.x. The official doc simply has a one-liner:

the controllers, models and views package name conventions are now just that

But now:

  • Where do you put the app folder?
  • What about the conf folder? And the public folder?
  • What about sub-projects?
  • Say you got it to work with activator; you probably still can’t get IntelliJ to stop complaining that the ‘package name does not correspond to the path’.

So, I came up with a sample Play 2.4.x Java project that has the bare-bones components to show how it can be done.

It showcases:

  1. the correct project folder structure
  2. config referencing modules from a custom package
  3. views referencing layouts and partials from a custom package
  4. views referencing models from a custom package
  5. views referencing helpers from a sub-project library
  6. routes referencing controllers from a custom package
  7. build.sbt building the solution and an internal library with custom packages

See Play Package Namespacing Example on GitHub
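Item 7 in the list above can be sketched as a minimal build.sbt for a root Play project plus an internal library. The project and folder names here are hypothetical; see the GitHub sample for the real layout.

```scala
// build.sbt — a hypothetical sketch of a Play root project aggregating
// an internal library sub-project that lives in its own custom package.
lazy val helpers = (project in file("modules/helpers"))

lazy val root = (project in file("."))
  .enablePlugins(PlayJava)
  .aggregate(helpers)   // build the library when the root builds
  .dependsOn(helpers)   // make its classes visible to the root project
```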

Play Framework 2.4.x keeps throwing “Package does not exist”

This drove me crazy and took me hours to resolve.

I kept getting Package [name] does not exist even though I’d set up the sbt subproject correctly with aggregate and dependsOn. IntelliJ was happy, but Play kept bombing out.

It turns out that when you modify the subproject hierarchy, you need to terminate the activator run session, and it’s probably best to do a clean, reload, and compile before kicking it off again.

Tested on:
Play Framework 2.4.x

The problem with Apple Music (from a product development perspective)

Apple announced Apple Music at the WWDC 2015 keynote. This is obviously not Apple’s first foray into this product category, which started with iTunes itself (the app and the store), then Ping, iTunes Match, and iTunes Radio. Here are a couple of my thoughts on Apple Music from a product development standpoint.

So much for “the crazy ones, misfits, rebels and troublemakers”.
Apple Music is for the masses.

Apple Music consists of three services: streaming à la Spotify, a “backstage pass” fan-artist relationship platform, and live radio. Apple Music will be available not just in Apple’s own ecosystem, but also on Android and Windows (no Web?). The service is priced at $9.99 per person (or $14.99 per family). This is definitely a clear indication that Apple sees Apple Music as a real revenue stream, as opposed to just another “point” for their hardware ecosystem, the way Samsung does with Samsung Milk and Nokia did with Nokia Music. It also makes it clear that it is a volume business. To be successful, Apple needs to focus on mass appeal (and eventually get to the long tail). It wants to be the modern major label/distributor. This is why Iovine joined Apple.

I would argue that the iTunes brand would have fit the “mainstream” audience Apple Music is targeting much better. Think about iTunes Radio, iTunes Match, iTunes Extras, iTunes Festival. Apple Music lives not only in the same industry but also aims for the same image – as cool as the latest pop, rock, or R&B artist at any given time. Unfortunately, an i/I-anything is a little bit passé at this point.

From the outset,
Connect is like Youtube meets Instagram meets iTunes Extra.

Each social platform has its own ethos and evolving culture. YouTube vlog content is mostly done in a fast-paced editing style. On Instagram, you’d better use the right hashtag if you’re posting a photo from the previous week. Each platform forms its own identity and attracts certain types of content. The clearer the identity and culture of the platform, the easier it is for the audience to consume its content. Just by seeing “ASMR” in a YouTube video title, you know: “Ah yes, this is one of those Autonomous Sensory Meridian Response videos”.

Connect is trying to merge and bring all these artist-fan interactions into a single consolidated platform. Will a single platform allow artists (and fans) the liberty to try out different content formats and interactions without confusing the audience? Some artists may produce high-gloss, highly produced, medium-duration behind-the-scenes footage à la iTunes Extras. Others may choose Vine/Instagram-style 6-to-15-second scrappy, silly video messages. Can the two live side by side?

Then of course there’s the other side too: the fan-generated content. From what I can see so far, Connect is an artist-centric platform. Real engagement comes from a two-way conversation that’s more than just likes, comments, and paying money for the artist’s work. An artist-fan platform needs to take into consideration fan-generated content, from retweets and mashups to full-blown covers.

I think it is pretty naive to hope you could bring all these artist-fan interactions into a single consolidated platform.

See what I want is,
the right music at the right time.

Beats 1 is an interesting concept. Iovine said nothing beats human curation (pun intended). I get that. I also miss hearing a human voice when listening to music – IMHO there’s an opportunity here to create an app that can intelligently mix podcasts and music.

But then there’s the “live” part of the equation. Beats 1 is live radio streaming, at a time when everyone else is going on-demand? That part is a bit of a mystery to me. It almost goes back to the Winamp ShoutCast days. Sure, live broadcasting makes sense for news, concerts, and sports. But I would personally prefer my music to be played at the “right time” rather than in real time. I think there’s a missed opportunity here to mix the two paradigms together: on-demand yet human-curated, through time-shifting or “sessions” triggered algorithmically by Siri’s Proactive assistance. There’s a lot of work involved in getting that formula right, something Apple would excel at, but here it resorts to the safe, traditional option.

In the world of Apple Music, my taste is reduced to 3 stations.

When I listen to radio shows (as podcasts, by the way), I listen to NPR. Like many, I take pride in having a slightly “off-the-beaten-path” taste in music. When the mandate is to appeal to the broadest audience possible, my suspicion is that Beats 1 won’t be a station I listen to. But I’m happy to be proven wrong when the time comes.

The future lies in the upcoming artists.

Apple’s long-term goal really rests on the new up-and-coming artists who will start (and perhaps end) their careers within the Apple Music ecosystem. These artists will not need deals; they’ll record in their own bedrooms, collaborate across the internet, and, most importantly, not be signed to a label. At that point the equation changes. Apple gets the lion’s share of the cut from the material. No longer will they have to pay recording labels an exorbitant amount of money. Apple gets enough of a cut to run the business and grow, and so do the artists. The question is whether Apple Music will survive to see that time.

UICollectionView with UIKit Dynamics

I’m working on my next iOS app project and was looking into integrating UIKit Dynamics (the 2D physics engine for UIKit) with UICollectionView. I found this old-ish, cool project by Ash Furrow on GitHub. I was thinking of using it and ported it into Swift (though I’ve since changed my mind).

Check out the work on GitHub: ronaldwidha/ASHSpringyCollectionView

Thought I’d share the port with the rest of the world. All UI logic and optimizations are derived from Ash Furrow’s work.

How to debug WatchKit and iPhone Parent App on XCode

So you made your Watch app; needing a network call, you put the call in the parent app’s delegate handleWatchKitExtensionRequest. Boom, you’re getting an error and cannot seem to debug the parent app. And here you are.

At first it looks as if you cannot debug a running WatchKit app and parent app (both on the simulator) at the same time using XCode 6.3. But actually you can.

Step 1

Set a breakpoint on the line where you make the call to the parent app, then run the WatchKit app.


Step 2

While the execution is paused on the specified breakpoint, go to the iPhone simulator and manually launch the parent app.

Step 3

Go back to XCode, head over to the menu bar, and select Debug > Attach to Process > your parent app. Press resume and you should hit the breakpoint in the parent app.


I really hope that in XCode 6.4 they’ll just attach to the parent process automatically.

The Magic of the Apple Watch


I don’t consider myself fashion-conscious, far from it. Though like most people, I occasionally see a piece of clothing, a pair of shoes, or even a camera (the X100) that makes me dream day and night about how much better my life would be owning that very thing.

Apple is very good at crafting such desire for their products. You can totally see yourself being the subject of envy of the people around you: “Is that the latest Apple thing? That looks so cool. Can I see it?”. It happened with the phone, the pad, the air, and most definitely with the Watch.


Clouded by the fantasy of owning one before anyone else, I wondered: is this a reflection of the quality of the product, or just the hype?


So, are you getting one?

XCode timeline editor for keyframe based animation

Flash Professional Timeline Editor

Don’t you just miss Flash?

As some of you may know, I am currently building an iOS app with Wita from Design Is Yay!. The app is called Kuko. It’s a sleep trainer app with hooks into Apple Watch. It’s a simple enough app for me to learn the ins and outs of Swift and Core Animation. The first thing I miss while working on this app is that XCode does not have a keyframe/timeline-based editor. Interface Builder is far from being one, and as far as I’m concerned, it doesn’t even try to be one.

QuartzCode is the missing piece from XCode. Apple, just buy these guys already!


QuartzCode is the keyframe-based timeline editor that we all wish XCode came built in with. You can do most of the things you’ve come to expect from a keyframe-based editor: create keyframes before and after; figure out the tween, motion, opacity, and easing; and at any point in time play, scrub, and tweak. The final output is a series of classes you can copy-paste into your project.

Already using Swift? Not a problem; it speaks the language.

Since it spits out code, it does beg the question of whether I have to create multiple views for all screen sizes and orientations.

Unlike with a universal storyboard with Auto Layout constraints and size classes, when using QuartzCode we’re really creating all our UI controls programmatically. This means we’re hard-coding the positions, widths, and heights of our controls. So do we need one view for each of the devices …err… all those screen sizes, really?

Fear not.

QuartzCode has an option named Relative Frame which converts all the layout and size values into numbers relative to the screen width. You can then amend them manually, if you like, to scale, fit to width, or do your own fancy anchoring calculation to work out the final values.

QuartzCode is nearly perfect. But the problem stems from the fact that it isn’t a feature inside XCode, but a separate app.

The biggest drawback of using QuartzCode for two months now is that it forces me to modify my UI elements in QuartzCode, even though I already have them in XCode. It’s somewhat of a pain, but to me it’s still worthwhile to have the preview capability rather than just guess, compile, and pray that I got the coordinates right.

Another issue I encountered is the awkward workflow when working with different animation states (e.g. entry, jump, walk, run). QuartzCode does not support multiple timelines. As a workaround, I have to create multiple projects for the different animation states and organize them by carefully naming the files in a certain way (e.g. kuko-entry, kuko-jump, kuko-walk).

What happens when you have changes that cut across all those different files? Yes, it’s a painful manual process.

Just like any other development/design tool, this app looks deceptively simple yet is layered like an onion.

No, I don’t mean that I have bad breath after using QuartzCode. What I mean is that QuartzCode is quite a comprehensive little tool. I continuously stumble on new features every time I use it (I just saw a tick box to reverse all animations, woot). This post hasn’t touched the full potential of what one could do with it. If you’re interested in using it, I encourage you to check out the examples page. If you have more Cocoa Touch/Core Animation mileage, I’m sure you’ll notice more things right away. I’ll share my thoughts as I use it more and more.

Suffice to say that if you’re making money on the App Store, QuartzCode is definitely one of the tools you want to have under your tool belt. Unfortunately, I think people like me who are just starting out will probably find more use for it than the pros, and the $89.99 price tag can be a hindrance. I would suggest dropping the developer team an email; maybe they can hook you up with a promotion. You never know.