<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>AdityaNag.com</title>
    <description>Aditya Nag's personal website. All content is copyright Aditya Nag, and is in no way intended to reflect the opinion of my employer.
</description>
    <link>https://adityanag.com/</link>
    <atom:link href="https://adityanag.com/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Thu, 21 Jan 2021 00:11:51 -0500</pubDate>
    <lastBuildDate>Thu, 21 Jan 2021 00:11:51 -0500</lastBuildDate>
    <generator>Jekyll v3.8.6</generator>
    
      <item>
        <title>Should all AI Systems be Explainable?</title>
        <description>&lt;p&gt;AI systems are increasingly making important decisions for ever-increasing numbers of people. The decisions these systems make can literally change people’s lives – and as AI systems become more entwined into real-world systems, the decisions they make need to be trustable.&lt;/p&gt;

&lt;p&gt;The use of black-box AI systems to make these decisions raises basic questions of public trust. In this note, we’ll examine ways to mitigate these issues without losing the many benefits of AI systems.&lt;/p&gt;

&lt;h2 id=&quot;types-of-black-box-systems&quot;&gt;Types of Black-box Systems&lt;/h2&gt;

&lt;p&gt;There are two kinds of black-box AI systems.&lt;/p&gt;

&lt;p&gt;The first is where the AI system is proprietary. The workings of the system are known but are held as a trade secret. The appropriate use of policy (corporate or political) can build trust in such systems, and we can consider this a solved problem that requires only the formulation of a good disclosure policy.&lt;/p&gt;

&lt;p&gt;The second kind of black-box AI system is one that is too complicated for any human to comprehend, and the decisions made by these systems cannot be fully explained - even by the creators of the system.&lt;/p&gt;

&lt;p&gt;Even if the decisions made by such an AI system are generally good, a society or business where important decisions are made by a mysterious entity that cannot truly be held to account runs contrary to democratic and corporate ideals of transparency and accountability.&lt;/p&gt;

&lt;p&gt;AI Explainability has emerged as a major area of research that is attempting to answer challenging questions posed by the widespread deployment of the second type of black-box AI system.&lt;/p&gt;

&lt;h2 id=&quot;explainability-explained&quot;&gt;Explainability Explained&lt;/h2&gt;
&lt;p&gt;An Explainable AI System is a system where the decision made by the AI can be explained to a wide range of users, and these users accept the explanation as reasonable, correct, and timely.&lt;/p&gt;

&lt;p&gt;Before we go further, however, we should answer the fundamental question of why one should even care about explainability.&lt;/p&gt;

&lt;p&gt;Let’s say you get a letter tomorrow telling you your auto insurance rates are going up by 30%. If the letter does not provide an explanation, or provides an inadequate one, you are going to call your provider and demand an explanation.&lt;/p&gt;

&lt;p&gt;Depending on what they tell you, you can then act: move to another provider, agree to pay the increased rates, or perhaps negotiate a new rate.&lt;/p&gt;

&lt;p&gt;Thus, explanations are solicited by humans for everyday or “local” events to help understand why something happened or is about to happen. An explanation empowers people to take actions that can lead to favorable outcomes for themselves.&lt;/p&gt;

&lt;h2 id=&quot;explainability-is-a-basis-for-trust&quot;&gt;Explainability is a basis for Trust&lt;/h2&gt;

&lt;p&gt;The lack of an explanation is funny when the AI gives you a head scratcher of a song recommendation, but similar AIs can be tasked with picking the right candidate for a job, or deciding who gets parole. These decisions demand an explanation, and governments across the world are starting to codify this demand into law.&lt;/p&gt;

&lt;p&gt;The European Union came close to including a binding “Right to Explanation” in the General Data Protection Regulation (GDPR), but the version adopted in 2016 contains only a non-binding form of the right.&lt;/p&gt;

&lt;p&gt;It is reasonable to assume that future laws will contain stronger versions of similar rights, and businesses need to be prepared to handle increasing demands for transparency and explainability from their customers as well as the government.&lt;/p&gt;

&lt;p&gt;There are many benefits to having the explanation for an AI’s decision be generally accepted as reasonable and correct:  User confidence and trust in the system will grow; there is a built-in safeguard against accusations of bias; the system can be shown to meet regulatory standards or policy requirements; and the developers of the system can continue to improve its performance.&lt;/p&gt;

&lt;h2 id=&quot;building-explainable-ai-systems&quot;&gt;Building Explainable AI Systems&lt;/h2&gt;

&lt;p&gt;There are various ways to build Explainable AI Systems. Three commonly used methods are listed below, but researchers are constantly working on creating new solutions.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Self-explainable models – the model itself is the provided explanation. This is the most interpretable and easily understood approach, and eliminates the issues associated with black-box models.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Global Explainable AI algorithms – this is an approach that treats the AI algorithm as a black box that can be queried in order to produce a model that explains it. This is not as good as a self-explainable model, but it does allow for a higher degree of confidence in the decisions made by the system.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Per-Decision Explainable AI algorithms – these algorithms take a black-box model that can be queried and a single decision made by that model, and explain why the model made that particular decision.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;
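&lt;p&gt;To make the per-decision idea concrete, here is a minimal, purely illustrative sketch in Python. The model and its features are hypothetical stand-ins (this is not any real XAI library): we treat the model as a query-only oracle, nudge one input at a time, and rank the inputs by how much the output moves.&lt;/p&gt;

```python
# Illustrative sketch only: explain one decision of a black-box model by
# querying it with single-feature perturbations and measuring how much the
# score moves. Real per-decision explainers are far more sophisticated, but
# they share this core idea of treating the model purely as a query oracle.

def black_box_score(features):
    # Stand-in for an opaque model: a credit-style score we can only query.
    income, debt, years_employed = features
    return 0.5 * income - 0.8 * debt + 0.3 * years_employed

def explain_decision(model, features, delta=1.0):
    """Rank features by how strongly a small change shifts the model's output."""
    base = model(features)
    influence = {}
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += delta
        influence[i] = model(perturbed) - base
    # Sort feature indices by absolute influence on this particular decision.
    return sorted(influence.items(), key=lambda kv: abs(kv[1]), reverse=True)

ranking = explain_decision(black_box_score, [40.0, 10.0, 5.0])
# ranking[0][0] is the most influential feature for this decision (here: debt).
```

A real system would perturb many features jointly and fit a small interpretable model to the answers, but even this naive probe yields a usable per-decision ranking.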

&lt;p&gt;Another way of verifying the algorithm comes through the use of counterfactuals – “what if” questions that contrast multiple scenarios. For example, if an AI system takes the input “Play a pop song from the 2000s” and responds with a certain decision, the response to the counterfactual “What if I ask for a rock song from the 2000s?” can give us a better understanding of the inner workings of the system.&lt;/p&gt;
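&lt;p&gt;The counterfactual probe can be sketched in a few lines of Python. The recommender below is a toy stand-in invented for illustration – the point is that contrasting the answers to two queries tells us which part of the request drove the decision, without ever opening the box.&lt;/p&gt;

```python
# Illustrative sketch: probing a black-box recommender with a counterfactual
# ("what if") query. The function and its catalog are hypothetical stand-ins.

def recommend(genre, decade):
    # Opaque stand-in for a recommendation model that we can only query.
    catalog = {
        ("pop", "2000s"): "Toxic",
        ("rock", "2000s"): "Seven Nation Army",
    }
    return catalog.get((genre, decade), "no match")

factual = recommend("pop", "2000s")            # the original query
counterfactual = recommend("rock", "2000s")    # the "what if" query

# The two outputs differ, so the genre term demonstrably influenced
# the decision, even though the model's internals remain hidden.
genre_mattered = factual != counterfactual
```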

&lt;h2 id=&quot;should-all-ai-systems-be-explainable&quot;&gt;Should all AI Systems be Explainable?&lt;/h2&gt;

&lt;p&gt;Given that it is apparently possible to build explainable AI systems, should all AI systems then be explainable?&lt;/p&gt;

&lt;p&gt;On the face of it, this would seem like a reasonable thing to do. After all, can’t human beings explain all the decisions that they make?&lt;/p&gt;

&lt;p&gt;Or can they?&lt;/p&gt;

&lt;p&gt;As it turns out, human-produced explanations can be quite unreliable. Even more surprisingly, it turns out that for certain kinds of decisions, forcing a person to provide an explanation actually lowers the quality of the decision!&lt;/p&gt;

&lt;p&gt;Gaining expertise in a subject makes the decision-making process increasingly unconscious and automatic, and forcing an explanation interferes with this automatic process.&lt;/p&gt;

&lt;p&gt;Elite athletes are an obvious example – Roger Federer instinctively knows where to hit a tennis ball to win the point, but if he started to think about it as he hit it, he’d probably make a mess of it (or not, but then he’s Roger Federer).&lt;/p&gt;

&lt;p&gt;This is an obvious analogue to certain black-box AI systems that appear to be delivering amazing results, and forcing all such systems to be explainable may result in losing the significant value of these systems.&lt;/p&gt;

&lt;p&gt;The fundamental issue causing unease is the lack of accountability. It is not possible to hold an AI system accountable for a bad decision in the same way that we can hold a human being accountable. In addition, punitive structures that work with humans cannot be made to work with AI systems.&lt;/p&gt;

&lt;p&gt;Human beings constantly self-regulate against a complex background of social and moral norms, business rules, and civil and criminal laws. An AI may be built to try to self-regulate in a similar manner, but if it fails there are no real consequences to that failure.&lt;/p&gt;

&lt;p&gt;Trying to hold the creators of the system accountable is unjust on the face of it, since they themselves do not know why their system behaves the way it does after a certain point.&lt;/p&gt;

&lt;h2 id=&quot;when-is-explainability-necessary&quot;&gt;When is Explainability necessary?&lt;/h2&gt;
&lt;p&gt;Since we are primarily concerned with the lack of accountability, it would appear that any system where the decision is important enough to demand accountability needs to be explainable.&lt;/p&gt;

&lt;p&gt;Systems that can make life-or-death decisions, or decisions that affect people’s lives in serious ways, must make explainable decisions.&lt;/p&gt;

&lt;p&gt;This can be regulated through policy (insurance companies in the US must be able to explain how they arrived at a rate decision) or will end up being regulated through social forces (the pushback against self-driving cars and social media engagement algorithms). There is little appetite for a black-box AI to make decisions of this nature.&lt;/p&gt;

&lt;p&gt;However, there are many applications where AI systems may not need to be formally explainable. Many industries have decision making systems that don’t need perfect reproducibility or explainability.&lt;/p&gt;

&lt;p&gt;For example, think about using AI to regulate the HVAC system for a large office building. As long as the temperature and humidity are maintained within certain norms, people are unlikely to ask for precise explanations why the system decided to set the thermostat to a certain temperature. The system can lead to large savings in energy costs.&lt;/p&gt;

&lt;p&gt;Or take the case of a mapping application. As long as it gets you from point A to point B in a safe and mostly reliable manner, you are not likely to ask why it took Daneel Street instead of Giskard Street. The benefits outweigh the consequences even if the system occasionally gets it wrong.&lt;/p&gt;

&lt;p&gt;We do not necessarily have to make a trade-off (if one even exists) between utility and explainability.&lt;/p&gt;

&lt;p&gt;As we enter the third decade of the 21st century, we can see the shape of the future to come. AI Systems are a real and important part of this future, and methods to increase the trust in these systems can only make them a more valued part of our society.&lt;/p&gt;

&lt;p&gt;And perhaps this is a precursor to that distant dream of true Artificial General Intelligence – but that’s a subject for another time!&lt;/p&gt;

</description>
        <pubDate>Wed, 20 Jan 2021 22:06:00 -0500</pubDate>
        <link>https://adityanag.com/journal/2021/01/20/should-all-ai-systems-be-explainable/</link>
        <guid isPermaLink="true">https://adityanag.com/journal/2021/01/20/should-all-ai-systems-be-explainable/</guid>
        
        
        <category>journal</category>
        
      </item>
    
      <item>
        <title>Ubuntu 18.04 LTS on a 2017 (5th Gen) Thinkpad X1 Carbon</title>
        <description>&lt;p&gt;I’ve recently decided to return to my open source roots. I’ve been using Windows 10 for the past couple of years, and OS X for a few years before that, but back in the mid-2000s, I ran only Linux on all my machines.&lt;/p&gt;

&lt;p&gt;For one reason and another I’ve been growing frustrated with the state of proprietary operating systems, especially with all the emphasis on pushing your data to the cloud and making everything subscription-based. I can’t be the only one who just wants a simple machine that works without constantly nagging me with “helpful suggestions”.&lt;/p&gt;

&lt;p&gt;So this past weekend, I decided to try installing Linux on my laptop. I didn’t want to dual-boot, because the only way to really get to know an OS is to use it full time. I did leave myself an escape hatch, though, in case things went wrong: a completely new SSD. I took out the SSD with Windows on it and kept it aside, and put in a brand new 256 GB SSD. This blog post chronicles my first impressions of Linux, in 2018, on a modern laptop.&lt;/p&gt;

&lt;h2 id=&quot;the-hardware&quot;&gt;The Hardware&lt;/h2&gt;

&lt;p&gt;I have a 2017 (5th Gen) Thinkpad X1 Carbon. It has 16 GB of RAM and an Intel Core i7-7600U CPU @ 2.80GHz, so it’s no slouch, and the battery lasts anywhere between 5 and 9 hours in Windows. I chose the Full HD screen, so I don’t need to worry about High DPI support (more on this later). I checked online, and apparently my machine is very well supported by all the major Linux distros, so I was feeling pretty good about trying out a few.&lt;/p&gt;

&lt;h2 id=&quot;fedora-28&quot;&gt;Fedora 28&lt;/h2&gt;

&lt;p&gt;I tried Fedora first. Installation went smoothly, and everything (with one exception) worked out of the box: Wi-Fi, Bluetooth, Thunderbolt, all the function keys for volume, brightness, and so on. The exception was the fingerprint reader. I knew this going in, so I wasn’t disappointed. However, I ran into a software issue that forced me to stop using Fedora.&lt;/p&gt;

&lt;p&gt;I have a Synology NAS, and I use CloudSync (Like Dropbox, but it syncs to my own NAS) to keep my files in sync between my various machines. I was unable to find an RPM for CloudSync - Synology provides a .deb instead. I did about 20 min of Googling, but didn’t find an immediate solution. A couple of forum posts said I could use alien to convert the deb… but I really couldn’t be bothered.&lt;/p&gt;

&lt;p&gt;This wasn’t too much of an issue since I was intending to try Ubuntu 18.04 LTS in any case. I tried Fedora just for the heck of it. I’m sure if I spent a couple of hours I would have found a solution, but I decided to move on to Ubuntu instead.&lt;/p&gt;

&lt;h2 id=&quot;ubuntu-1804-lts&quot;&gt;Ubuntu 18.04 LTS&lt;/h2&gt;

&lt;p&gt;Once again, installation was quick and easy. I like the new minimal install – instead of getting a lot of software, it just sets up Firefox and the base Gnome system. There’s no LibreOffice, RhythmBox, Thunderbird, and so on.&lt;/p&gt;

&lt;p&gt;Once again, all the hardware (except the fingerprint scanner) worked out of the box. I didn’t have to tweak anything at all. This was a pleasant surprise - yes, I know my particular laptop is very Linux-friendly, but there’s still something nice about not having to tweak config files.&lt;/p&gt;

&lt;h3 id=&quot;tweaks-and-software&quot;&gt;Tweaks and Software&lt;/h3&gt;

&lt;p&gt;Half the fun of using Linux is customizing your system to suit you. I spent a fun couple of hours getting re-acquainted with Gnome Extensions and Themes, and ended up with something that I really think works well for me. The feeling of not being beholden to a giant mega-corp is the cherry on top.&lt;/p&gt;

&lt;p&gt;Here’s a quick list of the software I installed:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;TLP -&amp;gt; https://linrunner.de/en/tlp/tlp.html&lt;/li&gt;
  &lt;li&gt;KeePassXC -&amp;gt; https://keepassxc.org/&lt;/li&gt;
  &lt;li&gt;Remmina -&amp;gt; https://remmina.org/&lt;/li&gt;
  &lt;li&gt;Visual Studio Code -&amp;gt; https://code.visualstudio.com/ (this is free and Open Source)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And the Gnome Extensions that I used:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Frippery Move Clock - moves the clock to the right. I have no idea why a clock in the middle of the screen makes any sense&lt;/li&gt;
  &lt;li&gt;Panel OSD - Notifications now show up on the right, not the middle of the screen&lt;/li&gt;
  &lt;li&gt;Transparent Gnome Panel - Does what it says on the tin&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As you can see, I didn’t go too crazy with the extensions. I know from experience that it’s best to keep things simple.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;It’s been a few days now, and things are really working well. I’ll post an update after a month, or possibly sooner.&lt;/p&gt;

</description>
        <pubDate>Sun, 12 Aug 2018 20:06:00 -0400</pubDate>
        <link>https://adityanag.com/journal/2018/08/12/ubuntu-1804-on-a-thinkpad-x1-carbon-5th-gen/</link>
        <guid isPermaLink="true">https://adityanag.com/journal/2018/08/12/ubuntu-1804-on-a-thinkpad-x1-carbon-5th-gen/</guid>
        
        
        <category>journal</category>
        
      </item>
    
      <item>
        <title>Wiring up ASP.Net Identity with Custom Classes</title>
        <description>&lt;p&gt;I looked for a simple tutorial that shows how to hook up ASP.Net (Core &amp;amp; .Net) Identity with user-generated classes, and I couldn’t really find one. So here are my notes on how to do it. These are rough notes, written quickly; I’ll come back later and make this tutorial more comprehensive.&lt;/p&gt;

&lt;h1 id=&quot;the-goal&quot;&gt;The Goal&lt;/h1&gt;

&lt;p&gt;Create a simple CRUD app in ASP.Net Core 2.1 as well as ASP.Net MVC 5. This app should have user logins via the built in ASP.Net Identity service, and users should be able to save their data independently of other users. For example, a simple ToDo app. A user should be able to register, log in, and create, read, update, and delete their own todos - but other users should not be able to see their todos.&lt;/p&gt;

&lt;p&gt;Sounds pretty simple, right? Well, it took me a few hours to figure out.&lt;/p&gt;

&lt;h1 id=&quot;aspnet-core-21-example-razor-pages&quot;&gt;ASP.net Core 2.1 Example (Razor Pages)&lt;/h1&gt;

&lt;p&gt;Use Visual Studio templates to create a WebApp with Identities. Build and run to make sure we can log in. Nothing complex here, just tick the boxes in the wizard (or copy and paste into the terminal).&lt;/p&gt;

&lt;p&gt;Next, create a custom class. Something like this:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;public class TodoListItem
{
    public int Id { get; set; }
    public string Title { get; set; }
    public bool isCompleted { get; set; }

    public virtual string ApplicationUserID { get; set; }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The critical line is the last one. By adding in the ApplicationUserID, we’re setting up a relationship between the auto-generated tables created by ASP.net Identity, and our custom class.&lt;/p&gt;

&lt;p&gt;The next thing is to use scaffolding to create the CRUD pages from this class. Again, just use Visual Studio’s built-in features. Make sure to choose the existing ApplicationDbContext so that the tables are properly linked.&lt;/p&gt;

&lt;p&gt;Once the scaffolding is done, we need to edit the code to bring in the currently logged in user’s ID. In ASP.Net Core, this is how you do it:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;// requires: using System.Security.Claims;
ClaimsPrincipal currentUser = this.User;
var currentUserID = currentUser.FindFirst(ClaimTypes.NameIdentifier).Value;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;These two lines of code are used in the controller or Razor code-behind to get the user ID. Once we have it, we can simply set the object’s ApplicationUserID property before we save it.&lt;/p&gt;

&lt;p&gt;To retrieve the currently logged in user’s items, use this code:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ClaimsPrincipal currentUser = this.User;
var currentUserID = currentUser.FindFirst(ClaimTypes.NameIdentifier).Value;
// Use LINQ to return only the items belonging to the current user
ToDo = await _context.TodoItem.Where(k =&amp;gt; k.ApplicationUserID == currentUserID).ToListAsync();
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Again, the important bit is getting the UserID, and then using LINQ to get all items that have the same UserID.&lt;/p&gt;

&lt;p&gt;In ASP.NET MVC 5, the user ID is retrieved in a different way:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;// requires: using Microsoft.AspNet.Identity;
var currentUserID = User.Identity.GetUserId();
var currentUsersTodo = db.TodoListItems.Where(k =&amp;gt; k.ApplicationUserID == currentUserID).ToList();
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h1 id=&quot;adding-the-dbset-to-the-dbcontext&quot;&gt;Adding the DBSet to the DBContext&lt;/h1&gt;

&lt;p&gt;Once figured out, it’s quite easy to create classes that are linked to the current user. Remember, we have to add the DbSet to the existing DbContext.&lt;/p&gt;

&lt;p&gt;This is different in .NET Core and .Net MVC.&lt;/p&gt;

&lt;h2 id=&quot;in-net-core&quot;&gt;In .Net Core&lt;/h2&gt;

&lt;p&gt;The file is called ApplicationDbContext.cs, and it’s found inside the “Data” folder.&lt;/p&gt;

&lt;h2 id=&quot;in-net-mvc-5&quot;&gt;In .Net MVC 5&lt;/h2&gt;

&lt;p&gt;The DB Context is inside the IdentityModels.cs file, which is located in the “Models” folder. In both cases, the DbSet declaration looks like this:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;public DbSet&amp;lt;TodoListItem&amp;gt; TodoListItems { get; set; }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
</description>
        <pubDate>Sun, 29 Jul 2018 00:28:00 -0400</pubDate>
        <link>https://adityanag.com/journal/2018/07/29/wiring-up-asp-net-identity-with-custom-classes/</link>
        <guid isPermaLink="true">https://adityanag.com/journal/2018/07/29/wiring-up-asp-net-identity-with-custom-classes/</guid>
        
        
        <category>journal</category>
        
      </item>
    
      <item>
        <title>When a CMS is too complex</title>
        <description>&lt;p&gt;I spent the past few days working on Umbraco. I thought I might go back to using a CMS for this website, as I have some ideas for adding some features and functionality.&lt;/p&gt;

&lt;p&gt;Umbraco is really powerful, but after spending hours on wiring it up by hand, I decided that it was too much work for a simple site. Writing in markdown and publishing a static site is a lot easier in my particular use case.&lt;/p&gt;

&lt;p&gt;I still want to add some features to this site, though, so I’m going to try to incorporate them under a sub-directory. That would mean moving this site from Netlify to my own server.&lt;/p&gt;

&lt;p&gt;Let’s see how it goes. For now, I’m keeping it simple.&lt;/p&gt;
</description>
        <pubDate>Sun, 03 Jun 2018 15:01:00 -0400</pubDate>
        <link>https://adityanag.com/journal/2018/06/03/when-a-cms-is-too-complex/</link>
        <guid isPermaLink="true">https://adityanag.com/journal/2018/06/03/when-a-cms-is-too-complex/</guid>
        
        
        <category>journal</category>
        
      </item>
    
      <item>
        <title>Using Linux through Visual Studio Code</title>
        <description>&lt;p&gt;Here’s how you can set the Windows Subsystem for Linux to be the default terminal inside VS Code. This allows for a very simple way to launch a bash prompt:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Ctrl + `
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The first thing to do is install the Windows Subsystem for Linux, and then your distro of choice from the Windows Store. There are tons of guides for this, so I’m not going into the details - just search “How to install Windows Subsystem for Linux” and pick a guide that works for you.&lt;/p&gt;

&lt;p&gt;Now that you have Linux installed and running, edit VS Code’s preferences. Use the keyboard shortcut:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Ctrl+,
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;or click on File &amp;gt; Preferences &amp;gt; Settings.&lt;/p&gt;

&lt;p&gt;Use this line to set the default terminal:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&quot;terminal.integrated.shell.windows&quot;: &quot;C:\\Windows\\System32\\bash.exe&quot;,
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once this is done, save and restart VS Code. Now just use the Ctrl+` shortcut and VS Code will launch an integrated bash prompt with the current working directory set to the folder you have open.&lt;/p&gt;

&lt;p&gt;If you want to change the CWD, use this line:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&quot;terminal.integrated.cwd&quot;: &quot;C:\\Users\\$USERNAME\\AppData\\Local\\Packages\\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\\LocalState\\rootfs&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I set mine to the root folder in bash. Your username and the package name will be different, of course. You can set the path to anything you like.&lt;/p&gt;

&lt;p&gt;This lets me write my blog posts, and then update my site from inside VS Code. Makes things nice and easy. It only takes me one quick command.&lt;/p&gt;
</description>
        <pubDate>Sun, 08 Apr 2018 01:37:00 -0400</pubDate>
        <link>https://adityanag.com/journal/2018/04/08/using-linux-through-visual-studio-code/</link>
        <guid isPermaLink="true">https://adityanag.com/journal/2018/04/08/using-linux-through-visual-studio-code/</guid>
        
        
        <category>journal</category>
        
      </item>
    
      <item>
        <title>Automated Deploy Pipeline to Netlify</title>
        <description>&lt;p&gt;Through the magic of simple bash scripting, I now have a very easy deployment pipeline for my website. I wanted to do this all in Visual Studio Team Services, but as far as I can tell, I’ll need to register a custom Linux deployment agent (aka a Linux VM running somewhere).&lt;/p&gt;

&lt;p&gt;Since I don’t want to do that, I used the Windows Subsystem for Linux, bash scripting, and the Netlify API to automate my deploys. This is how it works.&lt;/p&gt;

&lt;p&gt;I write my blog post in my text editor. I save it, and push the commit to VSTS - this is mostly just for backup, since the deploy is all happening on my machine. After pushing to VSTS, I run this simple script in WSL:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash
&lt;/span&gt;
jekyll build &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$PATH_TO_SITE&lt;/span&gt;/AdityaNag.com &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;&amp;amp;1 
zip &lt;span class=&quot;nt&quot;&gt;-r&lt;/span&gt; site.zip _site &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;&amp;amp;1
curl &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Content-Type: application/zip&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Authorization: Bearer &amp;lt;OAUTH_ID&amp;gt;&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--data-binary&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;@site.zip&quot;&lt;/span&gt; https://api.netlify.com/api/v1/sites/adityanag.netlify.com/deploys &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;&amp;amp;1&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;This generates the site, zips it up, and deploys it to Netlify. The entire process takes between 25 and 40 seconds (my site isn’t very big). I could shrink that further by doing incremental builds or something, but I’m OK with this for now.&lt;/p&gt;

&lt;p&gt;Finally, I use the email notification feature in Netlify to notify me when the build is successfully deployed.&lt;/p&gt;

&lt;p&gt;I wish VSTS would provide a Linux agent that has Jekyll built in. I suppose I could always switch to Hugo, but that’s a bigger project - maybe a weekend project!&lt;/p&gt;
</description>
        <pubDate>Thu, 29 Mar 2018 23:08:00 -0400</pubDate>
        <link>https://adityanag.com/journal/2018/03/29/automated-deploy-pipeline-to-netlify/</link>
        <guid isPermaLink="true">https://adityanag.com/journal/2018/03/29/automated-deploy-pipeline-to-netlify/</guid>
        
        
        <category>journal</category>
        
      </item>
    
      <item>
        <title>Moving to Visual Studio Team Services</title>
        <description>&lt;p&gt;Github pages are great, but they have one flaw - you have to have a public repo. This meant that I couldn’t really write drafts, or make future edits without the world knowing. I found that this was holding me back from updating my blog.&lt;/p&gt;

&lt;p&gt;So today, I moved my site’s repo from Github to Visual Studio Team Services. It’s free, with private repos. Unfortunately, VSTS doesn’t really have a good way to deploy to Netlify (Free static hosting? Yes please), without setting up a convoluted build pipeline that I frankly don’t have the patience to do.&lt;/p&gt;

&lt;p&gt;My interim solution is very simple. Write my posts on my local machines, and then use Jekyll via the Windows Subsystem for Linux to generate the static site. After that, simply zip it up and manually drag and drop into Netlify.&lt;/p&gt;

&lt;p&gt;I know I can automate this, and I probably will when it starts getting tedious, but for now, it’s good enough. From finishing a new post to publishing takes me less than a minute.&lt;/p&gt;

&lt;p&gt;Sometimes the easy solutions are the best.&lt;/p&gt;
</description>
        <pubDate>Tue, 27 Mar 2018 13:52:00 -0400</pubDate>
        <link>https://adityanag.com/journal/2018/03/27/moving-to-visual-studio-team-services/</link>
        <guid isPermaLink="true">https://adityanag.com/journal/2018/03/27/moving-to-visual-studio-team-services/</guid>
        
        
        <category>journal</category>
        
      </item>
    
      <item>
        <title>My own personal cloud</title>
        <description>&lt;p&gt;Last Saturday, I colocated my server. This server has been living under the guest room bed for the past year, being used for everything from test labs to providing NAS backup for my devices. It worked really well, letting me spin up VMs at will without paying Amazon or Microsoft for the privilege. However, we just moved to a new apartment which is smaller and doesn’t have FIOS, and now I have no good place for it.&lt;/p&gt;

&lt;p&gt;I thought about selling the server, but that doesn’t make sense as I do need something that acts as a testbed. And so I decided to colocate the server. I also looked at getting a dedicated server but those cost a LOT of money for the config I have (96 GB RAM, 8TB HDD, 24 cores) - I’d be paying hundreds of dollars a month for a similarly powerful server.&lt;/p&gt;

&lt;p&gt;Never having coloed before, I didn’t quite know what to expect. I didn’t want to ship the server, so it would have to be a local colo provider, and I started searching and calling around. After getting some eye-watering quotes, I finally found a provider that understood my homelab-ish needs and gave me what I consider a quite reasonable deal.&lt;/p&gt;

&lt;p&gt;I’m paying under $90 for a single 2U server, 2A of redundant power, 1 Gbps of unmetered bandwidth, and a /29. There are a few caveats, though – I told the provider that I’m running a homelab with minimal bandwidth needs, so he agreed to give me a high-speed connection with no metering. They’ll monitor the connection for three months, and if I go past what they consider reasonable (more than 5 TB, I think he said), they’ll let me know and we can talk about moving to a different plan. The same goes for power – if my server is routinely pulling down more than 3A, I’ll have to pay more.&lt;/p&gt;

&lt;p&gt;This really worked out well for me. I know I’m not ever going to hit those bandwidth or power limits, so I can effectively run this however I want. Having public IPs is really nice too, giving me the ability to run some services on my own hardware rather than pay Digital Ocean - I’m saving $15 a month there, so my colo bill comes down to $75.&lt;/p&gt;

&lt;p&gt;I have set up an IPsec tunnel to the server, so it feels just like it’s on the local LAN. Latency is around 30 ms, and Remote Desktop and SSH work absolutely smoothly. I really should have coloed years ago. I’m saving $20 a month on power and $15 for my DO server, which I no longer need, so for an effective cost of $55 a month, I get a super high-speed connection to the net, my own cloud, and the ability to run it however I want.&lt;/p&gt;

</description>
        <pubDate>Tue, 01 Aug 2017 12:47:00 -0400</pubDate>
        <link>https://adityanag.com/journal/2017/08/01/my-own-personal-cloud/</link>
        <guid isPermaLink="true">https://adityanag.com/journal/2017/08/01/my-own-personal-cloud/</guid>
        
        
        <category>journal</category>
        
      </item>
    
      <item>
        <title>Summer is finally here</title>
        <description>&lt;p&gt;It’s almost the end of May here in Massachusetts, and it’s finally starting to get warmer. And I’ve been learning Xamarin Forms, though I’m toying with the idea of just going straight to native. I have invested a couple of years into C# though, so it’s hard to move away from right now. I’m hoping that Xamarin Forms works out well for me.&lt;/p&gt;

&lt;p&gt;I’ve created a basic CRUD app in Xamarin Forms, and it works well on my iPhone and my Android tablet. UWP works as well, of course, but I’m not focused on UWP till after the Fall Creators Update. I’m looking forward to learning some Fluent Design. I tried getting started with the SDK, but they hadn’t released all the bits when I looked. I’ll probably look again next month.&lt;/p&gt;

</description>
        <pubDate>Fri, 19 May 2017 15:41:00 -0400</pubDate>
        <link>https://adityanag.com/journal/2017/05/19/summer-is-finally-here/</link>
        <guid isPermaLink="true">https://adityanag.com/journal/2017/05/19/summer-is-finally-here/</guid>
        
        
        <category>journal</category>
        
      </item>
    
      <item>
        <title>The Windows Subsystem for Linux Changes Everything</title>
        <description>&lt;p&gt;I used to run my site on Wordpress. Roughly two years ago, I moved it to Jekyll. I did this as I was tired of managing Wordpress, and didn’t want to deal with running a Database driven site anymore. Also, I wanted to host on Github, removing the hassle of running a server.&lt;/p&gt;

&lt;p&gt;It’s worked well for the most part, but I’ve always had an issue with running Jekyll on my laptop. I like to run Windows (though I have a Mac, and also run Linux), and it was a huge pain to get Jekyll running smoothly on Windows. So much so that I gave up. On Ubuntu, installing Jekyll is as simple as &lt;code&gt;apt install jekyll&lt;/code&gt;, and I would use my Linux VM if I really wanted to test something locally. The Mac is easier than Windows, but I much prefer apt to Homebrew, and I find it easier to get good documentation for Linux tools than for the same tools on a Mac.&lt;/p&gt;

&lt;p&gt;I started looking at the Windows Subsystem for Linux last year, and immediately realized that this is a game changer. Y’see, while I love Linux on the server, and all the lovely development tools and workflow, I’ve never really liked using it as a desktop. Back in the day, I ran purely on Linux for two years, but eventually gave up because I got tired of constantly tinkering with my machine just to keep basic stuff running (editing X conf files for multi-monitor support… argh). So I ended up with a split workflow: Windows/OS X for daily activities and general computing, and Linux on the server.&lt;/p&gt;

&lt;p&gt;The Windows Subsystem for Linux (WSL) is brilliant because it allows me to use Windows as a desktop, while still covering my Linux needs. Take this blog, for example. Here’s what I did to get it running on my local machine:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Install WSL&lt;/li&gt;
  &lt;li&gt;Git clone repo with one command &lt;code&gt;git clone reponame&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;apt install jekyll&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;jekyll serve&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
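
&lt;p&gt;Spelled out as shell commands at the WSL bash prompt, the steps above look roughly like this (the repository URL is just a placeholder for your own):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# inside the WSL bash shell
git clone https://github.com/username/blog.git
cd blog
sudo apt update
sudo apt install jekyll
jekyll serve
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;code&gt;jekyll serve&lt;/code&gt; builds the site and watches for changes, serving it at &lt;code&gt;localhost:4000&lt;/code&gt; by default.&lt;/p&gt;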

&lt;p&gt;That’s it. Now I’m writing this blog post on Windows with Visual Studio Code, while Jekyll runs in the background, auto-generating the site every time I hit save. Chrome is open, I hit F5, and everything updates. It’s magical!&lt;/p&gt;

&lt;p&gt;Inotify works, so Jekyll (running under Linux via WSL) notices that I’ve edited a file (in VS Code running under Windows) and it regenerates the site. THIS is how it should be.&lt;/p&gt;

&lt;p&gt;WSL in the new Creators Update covers 100% of my needs. I can run nginx, MySQL, gdb, gcc, Jekyll, Node, and .NET Core (which is fun, since I’m running .NET code on Linux on Windows). I can use Visual Studio to debug a Linux application running on my local machine. I can open a Linux file in a Windows app and a Windows file in a Linux app. I can use bash scripting and the power of sed, awk, grep, and all the lovely bash tools to parse any file on my system. It’s truly the best of both worlds for me. Native SSH too.&lt;/p&gt;

&lt;p&gt;Microsoft has ensured that my next laptop purchase will definitely be a Windows laptop, and not a Mac (even as I write this on a 2015 Retina MBP, running Windows via VMware Fusion, with Linux on Windows on a Mac… what a time to be alive!).&lt;/p&gt;

&lt;p&gt;WSL is a game-changer and I can only imagine how much better it’s going to get.&lt;/p&gt;

&lt;p&gt;2017 is the year of Linux on a Desktop - but the Desktop is running Windows. Think about that, and marvel.&lt;/p&gt;
</description>
        <pubDate>Thu, 13 Apr 2017 01:31:00 -0400</pubDate>
        <link>https://adityanag.com/journal/2017/04/13/the-windows-subsystem-for-linux-changes-everything/</link>
        <guid isPermaLink="true">https://adityanag.com/journal/2017/04/13/the-windows-subsystem-for-linux-changes-everything/</guid>
        
        
        <category>journal</category>
        
      </item>
    
  </channel>
</rss>
