Using Linux through Visual Studio Code

08 April 2018

Here’s how you can set the Windows Subsystem for Linux to be the default terminal inside VS Code. This allows for a very simple way to launch a bash prompt:

Ctrl + `

The first thing to do is install the Windows Subsystem for Linux, and then your distro of choice from the Windows Store. There are tons of guides for this, so I'm not going into the details - just search "How to install Windows Subsystem for Linux" and pick a guide that works for you.

Now that you have Linux installed and running, edit VS Code’s preferences. Use the keyboard shortcut:

Ctrl+,

or click on File > Preferences > Settings

Use this line to set the default terminal:

"terminal.integrated.shell.windows": "C:\\Windows\\System32\\bash.exe",

Once this is done, save and restart VS Code. Now just use the Ctrl+` shortcut and VS Code will launch an integrated bash prompt with the current working directory set to the folder you have open.

If you want to change the CWD, use this line:

"terminal.integrated.cwd": "C:\\Users\\$USERNAME\\AppData\\Local\\Packages\\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\\LocalState\\rootfs"

I set mine to the root folder in bash. Your username and the package name will be different, of course. You can set the path to anything you like.

This lets me write my blog posts, and then update my site from inside VS Code. Makes things nice and easy.

Automated Deploy Pipeline to Netlify

29 March 2018

Through the magic of simple bash scripting, I now have a very easy deployment pipeline for my website. I wanted to do this all in Visual Studio Team Services, but as far as I can tell, I’ll need to register a custom Linux deployment agent (aka a Linux VM running somewhere).

Since I don’t want to do that, I used the Windows Subsystem for Linux, bash scripting, and the Netlify API to automate my deploys. This is how it works.

I write my blog post in my text editor. I save it, and push the commit to VSTS - this is mostly just for backup, since the deploy is all happening on my machine. After pushing to VSTS, I run this simple script in WSL:

#!/bin/bash
# Build the site, zip it up, and push the zip to Netlify's deploy endpoint.
jekyll build -s $PATH_TO_SITE/AdityaNag.com > /dev/null 2>&1
zip -r site.zip _site > /dev/null 2>&1
curl -H "Content-Type: application/zip" -H "Authorization: Bearer <OAUTH_ID>" --data-binary "@site.zip" https://api.netlify.com/api/v1/sites/adityanag.netlify.com/deploys > /dev/null 2>&1

This generates the site, zips it up, and deploys it to Netlify. The entire process takes between 25 and 40 seconds (my site isn't very big). I could shrink that further by doing incremental builds or something, but I'm ok with this for now.

Finally, I use the email notification feature in Netlify to notify me when the build is successfully deployed.
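If you'd rather check from the shell than wait for the email, the deploy state can also be queried from the Netlify API. A rough sketch - the deploys listing endpoint and the jq filter are my assumptions here, not part of my script:

# Show the state of what should be the most recent deploy (assumes jq is installed).
curl -s -H "Authorization: Bearer <OAUTH_ID>" https://api.netlify.com/api/v1/sites/adityanag.netlify.com/deploys | jq '.[0].state'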

I wish VSTS would provide a Linux agent that has Jekyll built in. I suppose I could always switch to Hugo, but that’s a bigger project - maybe a weekend project!

Moving to Visual Studio Team Services

27 March 2018

Github pages are great, but they have one flaw - you have to have a public repo. This meant that I couldn’t really write drafts, or make future edits without the world knowing. I found that this was holding me back from updating my blog.

So today, I moved my site’s repo from Github to Visual Studio Team Services. It’s free, with private repos. Unfortunately, VSTS doesn’t really have a good way to deploy to Netlify (Free static hosting? Yes please), without setting up a convoluted build pipeline that I frankly don’t have the patience to do.

My interim solution is very simple. I write my posts on my local machine, and then use Jekyll via the Windows Subsystem for Linux to generate the static site. After that, I simply zip it up and manually drag and drop it into Netlify.
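Concretely, the WSL half of that comes down to two commands (the site path here is a placeholder):

# Generate the static site with Jekyll, then zip it up for Netlify's drag-and-drop deploy.
jekyll build -s /path/to/AdityaNag.com
zip -r site.zip _site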

I know I can automate this, and I probably will when it starts getting tedious, but for now, it’s good enough. From finishing a new post to publishing takes me less than a minute.

Sometimes the easy solutions are the best.

My own personal cloud

01 August 2017

Last Saturday, I colocated my server. This server has been living under the guest room bed for the past year, being used for everything from test labs to providing NAS backup for my devices. It worked really well, letting me spin up VMs at will without paying Amazon or Microsoft for the privilege. However, we just moved to a new apartment which is smaller and doesn’t have FIOS, and now I have no good place for it.

I thought about selling the server, but that doesn’t make sense as I do need something that acts as a testbed. And so I decided to colocate the server. I also looked at getting a dedicated server but those cost a LOT of money for the config I have (96 GB RAM, 8TB HDD, 24 cores) - I’d be paying hundreds of dollars a month for a similarly powerful server.

Never having coloed before, I didn't quite know what to expect. I didn't want to ship the server, so it had to be a local colo provider. I started searching and calling around, and after getting some eye-watering quotes, I finally found a provider that understood my homelab-ish needs and gave me what I consider a quite reasonable deal.

I'm paying under $90 a month for a single 2U server, 2A of redundant power, 1 Gbps unmetered bandwidth, and a /29. There are a few caveats though - I told the provider that I'm running a homelab with minimal bandwidth needs, so he agreed to give me a high-speed connection with no metering. They'll monitor the connection for three months and if I go past what they consider reasonable (more than 5 Tb, I think he said), they'll let me know and we can talk about moving to a different plan. The same goes for power - if my server is routinely drawing more than 3A, I'll have to pay more.

This really worked out well for me. I know I’m not ever going to hit those bandwidth or power limits, so I can effectively run this however I want. Having public IPs is really nice too, giving me the ability to run some services on my own hardware rather than pay Digital Ocean - I’m saving $15 a month there, so my colo bill comes down to $75.

I have set up an IPsec tunnel to the server so it feels just like it's on the local LAN. Latency is around 30ms, and Remote Desktop and SSH work smoothly. I really should have coloed years ago. I'm saving $20 a month on power and $15 for my DO server, which I no longer need, so for a net cost of $55 a month I get a super high-speed connection to the net, my own cloud, and the ability to run it however I want.

Summer is finally here

19 May 2017

It’s almost the end of May here in Massachusetts, and it’s finally starting to get warmer. And I’ve been learning Xamarin Forms, though I’m toying with the idea of just going straight to native. I have invested a couple of years into C# though, so it’s hard to move away from right now. I’m hoping that Xamarin Forms works out well for me.

I’ve created a basic CRUD app in Xamarin Forms, and it works well on my iPhone and my Android tablet. UWP works as well, of course, but I’m not focused on UWP till after the Fall Creators Update. I’m looking forward to learning some Fluent Design. I tried getting started with the SDK, but they hadn’t released all the bits when I looked. I’ll probably look again next month.

The Windows Subsystem for Linux Changes Everything

13 April 2017

I used to run my site on WordPress. Roughly two years ago, I moved it to Jekyll. I did this because I was tired of managing WordPress and didn't want to deal with running a database-driven site anymore. Also, I wanted to host on Github, removing the hassle of running a server.

It's worked well for the most part, but I've always had an issue with running Jekyll on my laptop. I like to run Windows (though I have a Mac, and also run Linux), and it was a huge pain to get Jekyll running smoothly on Windows. So much so that I gave up. On Ubuntu, installing Jekyll is as simple as apt install jekyll, and I would use my Linux VM if I really wanted to test something locally. The Mac is easier than Windows, but I much prefer apt to homebrew, and I find it easier to get good documentation for Linux tools than for the same tools on a Mac.

I started looking at the Windows Subsystem for Linux last year, and immediately realized that this is a game changer. Y'see, while I love Linux on the server, and all the lovely development tools and workflow, I've never really liked using it as a desktop. Back in the day, I ran purely on Linux for two years, but eventually gave up cause I got tired of constantly tinkering with my machine just to keep basic stuff running (editing X conf files for multi-monitor support... argh). So I ended up with a split workflow: Windows/OS X for daily activities and general computing, and Linux on the server.

The Windows Subsystem for Linux (WSL) is brilliant cause it allows me to use Windows as a desktop, while still covering my Linux needs. Take this blog, for example. Here's what I did to get it running on my local machine:

  1. Install WSL
  2. Clone the repo with one command: git clone reponame
  3. apt install jekyll
  4. jekyll serve
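In shell form, steps 2-4 come down to roughly this (the repo URL is a placeholder, and apt will want sudo):

# Inside the WSL terminal: grab the site, install Jekyll, and serve it locally.
git clone https://example.com/reponame.git
cd reponame
sudo apt install jekyll
jekyll serve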

That's it. Now I'm writing this blog post on Windows with Visual Studio Code, while Jekyll runs in the background auto-generating the site every time I hit save. Chrome is open, I hit F5, and everything updates. It's magical!

Inotify works, so Jekyll (running under Linux via WSL) notices that I’ve edited a file (in VS Code running under Windows) and it regenerates the site. THIS is how it should be.

WSL in the new Creators Update covers 100% of my needs. I can run nginx, mysql, gdb, gcc, jekyll, node, .NET Core (which is fun, since I'm running .NET code on Linux on Windows). I can use Visual Studio to debug a Linux application running on my local machine. I can open a Linux file in a Windows app and a Windows file in a Linux app. I can use bash scripting and the power of sed, awk, grep, and all the lovely bash tools to parse any file on my system. It's truly the best of both worlds for me. Native SSH too.
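That last part works because the Windows side of the disk is mounted under /mnt inside WSL. A trivial example (the path is made up):

# Search the Windows copy of the blog for a tag, straight from bash.
grep -ri "jekyll" /mnt/c/Users/me/Documents/blog/_posts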

Microsoft has ensured that my next laptop purchase will definitely be a Windows laptop, and not a Mac (even as I write this on a 2015 Retina MBP, running Windows via VMware Fusion, with Linux on Windows on a Mac… what a time to be alive!).

WSL is a game-changer and I can only imagine how much better it’s going to get.

2017 is the year of Linux on the Desktop - but the Desktop is running Windows. Think about that, and marvel.

I built a UWP App!

08 February 2017

I’ve been learning the Universal Windows Platform for the past few weeks. This is all part of my new year’s resolution to learn new stuff.

So, how’s it going? I’m glad you asked!

[Screenshot: Bookshelf, my UWP app]

It's early days, but I have a bunch of stuff working. The app searches Google Books, and pulls images from Amazon if Google doesn't have any. I'm parsing JSON, making async HttpClient calls, the back button works, text can be expanded and collapsed, and the app is responsive.

I’ve learnt a lot and I’m happy with my progress so far. I have a list of features I want to add to this app, and then I intend to submit it to the Windows App Store. Think integration with GoodReads, saving books to your library, and so on.

UWP is a pleasure to use, and for hobbyists, easy to pick up. I'm not a software developer by trade, and I'm not planning (yet) to build very complex apps. I really like the fact that I can build this app, and it runs without any modification on my Raspberry Pi running Windows 10 IoT. It works on Windows Mobile too, but we all know how that's turned out.

Microsoft is looking good to me. The promise of UWP, the moves they're making with .NET Core, the Azure strategy - this is all good news. Of course, maybe I don't know anything and the Windows desktop is dead and I'm learning a dying language and framework... but I'll take that chance.

My brain is happy cause I’m pushing it to learn new stuff. I’m building tangible apps that are actually useful (for me, but that counts).

The code for my Bookshelf App is rather raw, but once I polish it up a bit I’ll probably push it to Github. I keep my development repos on VisualStudio.com because I can have private repos and don’t need to worry about leaking credentials.

Keep on learning :)

How I secure my IoT devices

03 January 2017

I thought a quick post on securing IoT devices might be useful to some people. The security of the Internet of Things is a hot topic these days, and it’s something you have to think about before adding the latest little gadget to your home network.

In my case, I have a couple of D-Link DSP-W110 Smart Plugs. These are neat little devices that I use to turn on a few lamps that aren’t easily accessible. At first, I had them connected to my home WiFi, but I grew uncomfortable with the idea that an attacker could break into these devices (they talk to the Internet, after all) and be inside my home network without my ever noticing it.

To mitigate this risk, I have created a new dedicated WiFi network for all my IoT devices. Here's how it works:

I have four main network devices:

  1. A dedicated router. FIOS plugs into this, and everything else on the network is behind this device. It's an enterprise-level router with lots of security features. I'm not going to name it, but you can assume it's something like pfSense, but not exactly.
  2. A managed switch that supports VLANs. Again, enterprise grade.
  3. Main WiFi AP for all my personal devices. Cheap consumer router
  4. Dedicated WiFi AP for IoT devices. More expensive consumer router

The LAN side of the router plugs into the managed switch on port 8. My IoT Wifi AP plugs into port 1. Port 1 and Port 8 are on a VLAN, so they can only talk to each other.

My home wifi AP plugs into port 7. A couple of hardwired devices go onto ports 2-6. Ports 2-8 are on a second VLAN.

The IoT devices can talk to each other, and to my router. They cannot communicate with any other device on my LAN. The only way to do that would be to break through the router... and I have various firewall rules and other security measures set up to mitigate that risk. I also throttle the IoT network to less than 512 Kbps upstream. This is more than enough for the devices, and slow enough to make DDoS attacks far less worthwhile compared to my full-fat 150 Mbps symmetric FIOS connection.
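Purely as an illustration of what that kind of cap looks like - this is the rough equivalent with tc on a Linux box, not the actual config on my router:

# Token bucket filter capping egress to ~512 Kbps on a hypothetical IoT-facing interface.
tc qdisc add dev eth1 root tbf rate 512kbit burst 32kbit latency 400ms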

An attacker would now need to break into the D-Link Smart Plug. Let's say this happens via a vulnerability, and the attacker is now running a root shell on the plug. He can now scan the network, and he will see:

  1. The dedicated IoT wifi AP
  2. Any other IoT devices (smart plugs, some Arduinos, etc)
  3. The VLAN switch (enterprise grade)
  4. The router (enterprise grade)

Breaking into the IoT wifi AP won't really help much. To get access to sensitive data, he'd have to break into the switch or the router, both of which are far more difficult to break into than a random IoT device. Since the upload speed is heavily throttled, he can't really use the smart plug as a jumping-off point to do much damage to external networks. And I have rules on the router that will detect (in theory) unusual activity on the IoT network. I'm aware that a dedicated hacker can break through all this, but I'm not mitigating against that level of attacker (I can't, and neither can most people). However, a random root shell vulnerability on an IoT device won't let anyone get access to my LAN and all my private network traffic.

So there you have it. You might think this is really expensive, but it’s really not. I picked up the router and the switch for around $150 used on eBay. The IoT AP was $20. The main AP was $100. Yes, you need to understand networking and it’s complicated if you have never done this before - but I think it’s absolutely worth it for anyone who’s moderately technically inclined.

A New Year to Learn New Stuff

03 January 2017

It’s 2017 now, and it’s a new year. I’m looking forward to this year - there’s lots to do, and lots to learn. At work, I’m going to be working on the kinds of projects you only get when two giant companies merge. Some of these projects are completely sui generis, and I’m excited to do stuff that’s entirely new.

At home, my wife and I have lots of plans for this year. More about those plans as they happen through the year.

And personally, I'm going to keep on learning new stuff. The past couple of years I've done a good job of learning C# and the .NET framework, and I want to go on with that. I'm now at the point where I can read strange code and it (mostly) makes sense to me. That's exciting. I have a fair understanding of interfaces and async code. ASP.NET MVC isn't unfathomable. I'm pleased with the progress I've made, and I'm going to stick with it. I might also start brushing up on my JavaScript/TypeScript. I'm even looking into learning some formal electronics - I have learnt ad hoc, but now I want to get into the theory and practice of electronics so that I can do more with my Arduino and Raspberry Pi devices.

And oh yeah, I have to migrate my repos from BitBucket to GitHub. I use BitBucket cause it lets me have unlimited private repos, but everyone else is on GitHub. So I'll move the code over after scrubbing it of any keys, personal information, and the like.

So here’s to 2017, and to learning new stuff.

The Perfect Umbraco Development Workflow for Azure

21 November 2016

I've spent a month using Umbraco now, and I really like it. I've used it to build a website for a small business. The extensibility is the best part, along with the ability to edit the project in Visual Studio. Adding custom controllers is easy, and I've built a few to handle contact forms, generate blog lists, validate user input, and so on.

In this blog post, I want to talk about how I set up my workflow in Azure. I spent a lot of time trying to figure this out, and I hope someone else finds this useful.

Storing Code on Visual Studio Online

The first thing I did was set up my project on VSO. The service is free, and I like that I can do code versioning (with Git), project management, and release management. It really is an end-to-end pipeline, and even though I don't use all the advanced features, it's a great place to host my projects. And did I mention it's free?

This part is optional, really. You can use Github, BitBucket, Gitlab, anything you like. Just make sure you’re using a version control system that works for you.

A three-tier workflow

Tier 1

The production website is hosted on Azure, under the S1 App Service Plan. The site is linked to an MSSQL DB, also running in Azure.

Tier 2

I'm using the Deployment Slots feature to run my staging environment. The staging environment is linked to a separate MSSQL DB, also running in Azure.

Tier 3

My local dev environment. This is a Visual Studio project that runs on my computer. It uses a localhost DB as the backing store.

My daily workflow consists of editing in VS. I build controllers, edit templates, publish test pages, and so on, all on my local machine. The content is all test data - lorem ipsum and random images, that sort of thing.

Once I'm happy with my work, I check in my changes. After that, I do one of two things:

  1. Run the project locally and use Courier to migrate DB specific changes
  2. Build the project and publish it to stage

Pushing changes with Courier

I paid $100 for Courier and it’s totally worth it for me. It makes it really easy to push templates, document types, media types, and everything else. The only thing Courier doesn’t do is push updated DLLs, and that’s what the next step is for.

Building & Publishing the project

I do this to push any changes to my DLL files. This is usually only needed when I've edited the controllers, created a new model, or changed the web.config. I'm using Web.config transforms so I can safely publish to stage or production.

Once everything is on stage, I test with real content. Stage is a mirror of production, so I can make sure everything looks good. When I’m satisfied, I use Courier to push to Prod, and (if necessary), I build and publish the project to Prod from Visual Studio.

If I’m only creating content (new blog post, new page, etc) and not touching the templates/doc types, I use stage. That’s my primary CMS - I’ll write the blog post, add the image, preview it. When I’m satisfied with the content, I use Courier to push to Prod. I rarely (almost never) have to actually log into the Prod Umbraco instance.

I hope this makes sense. If you have any questions, ask me via Twitter or email me. I don't have comments on my blog - you can read why here.