<h1 id="taking-notes-by-hand"><a href="http://www.imtraum.com/blog/taking-notes-by-hand">Why I scribe notes by hand</a></h1>
<p><em>2018-05-08</em></p>
<p>For years <a href="https://evernote.com/">I used Evernote</a> to take meeting notes, capture ideas, and track my to-dos. Several years ago <a href="http://bulletjournal.com/">I started keeping a bullet journal</a> to give myself a break from the glare of a computer monitor, and I've not looked back.</p>
<h1 id="evernote-notes-forever">Evernote, notes forever</h1>
<p>Evernote was a great program for me for a really long time, and I still have a lot of notes from ages past stored in it. I loved having my notes on all of my devices, and I loved that I could search them to find some obscure thought from the kickoff meeting for building the Ark. It's an easy program to use for anyone looking for a good note-taking platform.</p>
<p>So why did I stop using it?</p>
<p>Simple really, and it has nothing to do with Evernote; I needed a break from computers.</p>
<p>I've been working professionally with a computer in front of me since 1996, and personally since 1985. That's a long time <a href="https://www.aoa.org/patients-and-public/caring-for-your-vision/protecting-your-vision/computer-vision-syndrome" title="computer vision syndrome">and a lot of bright light</a> blasting me in the face: code, documents, games, videos, and entertainment. Even as dedicated as I am to the computer arts, I needed a break.</p>
<h1 id="enter-the-bullet-journal">Enter the bullet journal</h1>
<p>I had never used a journal before. I had never wanted to use a journal before. In my mind, a "journal" was a fancy word for a diary, and that wasn't something I was terribly interested in. But one of my colleagues had been doing this bullet journal thing and talked me into watching a video or three that piqued my interest. He walked me through making an index and the legend of symbols to help keep me organized; he introduced me to the concept of collections, and I was hooked.</p>
<p>So I dove in. I dug up <a href="https://www.leuchtturm1917.us/notebooks/" title="a Leuchtturm 1917">a notebook that I got</a> at a Sitecore conference and my favorite commodity roller ball pen, a <a href="http://a.co/3154oJJ" title="My favorite non-luxury roller ball pen">Uniball Signo 207</a>. I created my index, numbered my pages, and created a legend so I wouldn't forget the symbols for things. I made the index for my first month and started journaling away in meetings, taking notes and capturing my to-do's. My journal was organized, I knew where everything was, and just like Evernote, I had my journal everywhere I went. I enjoyed the peace of it; no clicking of keys, no bright monitor screaming into my face, no instant messages or email notifications or the 20 other sources of toast that distracted me in meetings. I was more focused than ever before, and while my penmanship was terrible and my hand would cramp thanks to handwriting muscles that hadn't been exercised since high school, I loved it.</p>
<p>I'd walk into a meeting with my little notebook and my pen and everyone looked at me like I was insane. I was in a meeting without a computer; the technologist taking notes with grandpa-level tech. I got a lot of snarky comments, and a good deal of ribbing, but there was an upside that wasn't obvious to anyone but me.</p>
<p>After a short while something unexpected started happening; the thoughts I captured in the journal "stuck". I found that when I wrote something out, something worth writing anyway, I'd remember it much more clearly than if I were to type it into Evernote. Coupled with my newfound ability to focus on the meeting without the distraction of a laptop, my engaged productivity went up. I attribute this to the simple fact that unlike typing, which is automatic for me, writing requires a couple more brain cycles.</p>
<p>I genuinely believe that journaling has made me a better person, or at least made a portion of my life more enjoyable. And to top it off, I now have a new thing to obsess over: fountain pens, nice pencils, and quality stationery.</p>
<h1 id="la-resistance">La Résistance</h1>
<p>Fellow nerds, put away your machines in meetings; they are heavy and distracting. Pick up something that makes marks and scratch your thoughts into something that won't get you in trouble. Better yet, get yourself a <a href="https://www.gouletpens.com/notebooks/c/10/?sortBy=productName+asc&facetValueFilter=tenant%7Epaper-collection%3Aleuchtturm1917-hardcover">Leuchtturm 1917</a> or a <a href="https://www.gouletpens.com/notebooks/c/10/?sortBy=productName%2Basc&facetValueFilter=tenant%7Epaper-collection%3Arhodia-webnotebook">Rhodia Webnotebook</a> and <a href="http://www.retro51.com/fwi_tor_vintage.html" title="Retro 51 Fine Writing Instruments">a good quality rollerball pen</a>. You can thank me later.</p>
<p>Also, please... PLEASE, brush up on your penmanship. I'm embarrassed on your behalf and we don't even know each other.</p>
<h1 id="the-importance-of-story"><a href="http://www.imtraum.com/blog/the-importance-of-story">The Importance of Story</a></h1>
<p><em>2018-04-28</em></p>
<p>Creating a compelling story is one of the most important skills a technology leader must develop. You'll use storytelling to convince your boss that your ideas are good ones, and influence your peers to take your advice.</p>
<p>Good storytelling is a cornerstone of influence, and influence is how you get things done.</p>
<h1 id="the-death-of-good-ideas">The death of good ideas</h1>
<p>I've worked with countless technologists who had fantastic ideas that never went anywhere. Ideas that could have, if implemented with excellence, changed the fate of projects, departments and companies. I've watched these ideas die before they could be presented, I've watched them die the slow death suffered by those with no influence, and I've watched them die on the vine. I've watched software and network engineers, project managers and business analysts all suffer the dismay of knowing they were right and having absolutely no idea how to get support and buy-in.</p>
<p>Why? If they were good ideas, why didn't they go anywhere? Wasn't someone listening?!</p>
<p>They lacked a fundamental facet needed to sell all ideas: a good story.</p>
<h1 id="influence-over-authority">Influence over authority</h1>
<p>It's rare that we have absolute authority over anything in life. In our professional life, our ideas will be bound by constraints that our peers or superiors will impose and control. If you're lucky, you might only need their blessing, but more than likely you'll need their resources and probably their active support. These both come in finite quantities that you will compete for; Jeff in accounting has good ideas too.</p>
<p>It is advantageous to leverage influence rather than authority to see your ideas blossom and bear fruit, and good storytelling is one of the tools we're going to use to drive engagement. A good story is critical to influencing others, and thus critical to seeing your initiatives succeed. Even if you have a high degree of credibility, convincing others that your idea is worth pursuing without spending your political capital requires a compelling perspective.</p>
<p>We've all seen the terrible idea that somehow manages to get more interest and support than our genius business saving profit multiplier and said to ourselves, WTF?! Generally speaking, and aside from "politics", that bad idea was more than likely presented in a way that was more attractive than yours. The other person was just better at being heard, taken seriously, or viewed with more credibility than you at that point in time.</p>
<h1 id="being-a-storyteller">Being a storyteller</h1>
<p>Forming the story that supports your idea can be, but doesn't have to be, complex. There are, however, a few things to keep in mind.</p>
<h2 id="know-your-audience">Know your audience</h2>
<p>Above everything, you need to know your audience. This can be a boss, a peer group, a board of directors, a user group or, for larger initiatives, any combination of these. Take care when identifying your audience; if you include the wrong people or exclude the right ones, you're going to have an uphill battle to navigate. Knowing your audience is by far the most important and difficult aspect of storytelling, and it requires that you flex yourself rather than expect them to accommodate you. With experience you'll get better at this part of things, but be patient; it can be painful when you're wrong or misread a situation.</p>
<p>In my day-to-day, I commonly find myself both formally and informally presenting to people with a wide assortment of interest in my work. These conversations are usually non-technical and based in areas of finance and resource management, contracting and compliance, in both business and semi-social settings. To be successful, I need to tell complex technical stories in terms that non-technologists find <em>valuable</em>, and so I have to put on my finance hat when discussing the nuances of a financial constraint on a project, or my legal hat when navigating a compliance or contracting concern. This does not mean I need to be an accountant or an attorney, but it does mean I need to be able to anticipate questions or concerns that come from those disciplines; I need to flex from my technical comfort zone into areas where I may not demonstrate as much acumen as those who live them day in and day out.</p>
<p>Tailoring your story to the needs of your audience removes the burden of them needing to know aspects of your initiative that aren't pertinent to their background, and makes your story easier to absorb. It allows them to focus on the part of your story that they care about, and if you tell it well (by addressing their needs), you'll likely gain a supporter.</p>
<p>A word of caution: don't assume the people you are engaging are one-dimensional, that the attorney will only have concern for the legal position, or that the project manager doesn't care about quality assurance, or that the architect doesn't care about resource planning. Your audience will have diverse interests and backgrounds, and you need to account for that. Ignore this fact at your peril.</p>
<h2 id="compose-your-story">Compose your story</h2>
<p>This is largely a mental exercise, summed up simply as "know your audience and pull your shit together". If you want to convince your leaders that refactoring a large part of your codebase is a good idea, be prepared to explain to them why. Most likely, the "why" from a technical standpoint will be easy for you, and if your leader is an engineer your story will be easier than if they are a project manager or marketing person. They will care more about resource needs and financial investment than they will the benefit of a cleaner, more consistent codebase. Remember, it's not about the black and white perspective we want them to see our story from, but the reality that they will care about the whole initiative, just some areas more than others. Sometimes, much more.</p>
<p>Use this to your advantage.</p>
<p>Many people, especially those with nominal exposure to high tech, might look at your ideas primarily through the lens of return on investment (ROI). They may want to know the full cost of your initiative; licensing impact, resource needs, financial investment, etc., measured against the benefit of the project over some period of time. Depending on the size and type of project and your company's financial strategies, they may consider depreciation or amortization, which will add additional complexity for you to address.</p>
<p>It is critical to understand that a person with an ROI focus values other aspects of your project, just not as much as the black and white ROI aspect. In my opinion, ROI is important but a bit lazy if it's the only lens used to view your project, and I guarantee you will encounter this obstacle.</p>
<p>For the refactoring example I mentioned, this can be a very difficult picture to paint. From a raw <em>business</em> perspective, a project that doesn't directly impact the consumer or the bottom line will require more creativity on your part to sell. Compared against a project to improve search performance, decrease conversion times or increase stability, projects without a direct impact will require you to be a better storyteller. You'll need to highlight the benefits of operational efficiency or risk management, or frame the work as an evolution of technology that enables new ways of doing things and paves the way to innovate more easily in the future.</p>
<p>Or that sometimes, it's just the right thing to do. Don't discount the potential to include morality in your story; it can be a powerful cannon to field. Just be aware that it can blow up on you in irksome ways.</p>
<h2 id="on-voice-and-tone">On voice and tone</h2>
<p>You need to be genuine, so establish a voice and tone that works for you. For me, conversational, practical and personal are the most natural ways to communicate, and that comes out in my style. What's natural for me will likely not be what's natural for you, and you'll have to spend some time figuring this out for yourself.</p>
<p>Whatever it is, establish a consistent voice and it will build trust through predictability.</p>
<h2 id="on-presentation">On presentation</h2>
<p>You've got your idea, you spent time to identify your audience, and you've framed your story; now it's time to start socializing it. If you're thinking this is just a one-off presentation, you're wrong and you'll get nowhere. You should be having conversations in your one-on-ones and requesting time from people to get their initial impressions. Be wary of "hallway conversations" with those you don't have solid working relationships with, as these can work against you. The last thing you want to do is freak out a peer of your boss that you don't know well with your super expensive, resource intensive, business changing idea while standing outside the bathrooms.</p>
<p>Your story should have a beginning, middle and end that scales with the situations and timescales you'll be communicating under. Can you pique my interest in a <a href="https://www.youtube.com/watch?v=OWwOJlOI1nU" title="Danger, Will Robinson!">two minute hallway conversation</a>? Keep me engaged in a 45 minute formal presentation? Generate excitement in a 90 minute or half day working session?</p>
<p>How well does your story scale? If it doesn't, it needs to.</p>
<p>In any scenario where you're engaging your audience you need to be able to communicate effectively. Many people think that presenting is simply occupying the lead chair in a room full of people and driving the conversation with a (hopefully) well crafted deck. This is only partly true, and it is folly to tell your story without considering other aspects critical to storytelling beyond "the speech".</p>
<p>Your comportment, e.g., personal appearance, body language, composure, and presence, is often overlooked and in some ways just as challenging to get right as the speaking part. This is your outward appearance and it is a critical part of building credibility and influence. You need to be natural here, to be yourself, so don't try to be someone you aren't. At the same time, your audience will have its own needs of your comportment in order to be influenced, and this is a tricky thing to balance against who you are as a person.</p>
<p>Generally, I believe it's more important to be genuine and passionate than to put on a suit and a too tight tie if that's not your thing. Be aware that many traditional business people will disagree with me here...</p>
<p>Get these right and you should be fine: Sit up straight, plaster on a smile and be confident, and don't fidget. Wear clothes that fit. Don't be smelly. Don't over think it. Really, it is this simple.</p>
<p><strong>Fascinating Josh fact:</strong> I don't look at eyes when I'm maintaining eye contact with my audience. I look at noses, they're far less intimidating and no one can tell...</p>
<h3 id="to-powerpoint-or-not-to-powerpoint.just-dont">To Powerpoint or not to Powerpoint... just don't</h3>
<p>And now to start a flame war with a LOT of people who fancy themselves good presenters:</p>
<blockquote class="blockquote">
<p>"PowerPoint is a crutch few know how to walk with." <em>- Joshua Gall</em></p>
</blockquote>
<p>Don't get me wrong, PowerPoint has its place. It can be a powerful tool in the hands of someone who knows how to use it and has advanced presentation and storytelling skills. My boss is one of these people, and I've seen him "throw some slides together" in an insanely short amount of time; it boggles the mind. He's also fantastic on his feet and has a natural presentation style that is complemented by his deck, which by the way is entirely useless to anyone else who wants to use it to present.</p>
<p>Ask me how I know...</p>
<p>I avoid PowerPoint as much as possible, and I find myself staring at it for an age and a day getting nothing actually done when forced to use it. I'm just not wired for creating disjointed slideshows where my public speaking ability is expected to be the concrete that holds it all together. Based on the dozens of crap presentations I see every year, this is probably true for you as well; sorry (not sorry) for the bad news.</p>
<p>A few of the "Sins of Powerpoint™" that I commonly see, and have been guilty of myself:</p>
<ul>
<li>The slides distract from the story (too. many. memes.)</li>
<li>The slides are read to the audience (omg please stop doing this)</li>
<li>The slides are too dense or too sparse</li>
<li>The deck is poorly designed and looks sophomoric</li>
<li>The presenter doesn't engage the audience effectively; as the presenter, you should:
<ul>
<li>Use engaging body language when the audience should focus on what you say</li>
<li>Remain still when the audience should be absorbing information from the deck</li>
</ul>
</li>
</ul>
<p>If you're going to PowerPoint, and there will be cases where you are required to, learn to use it as part of your presentation and don't do those things above. Especially reading your slides, even a little. By the time you're done mumbling through things, I've already read the slide and am thinking about what I want to eat later. Or pandas. Probably pandas.</p>
<p>I stopped listening, and I'm not alone.</p>
<p>As an alternative, consider the following:</p>
<p>Pull together a one- or two-page (possibly more) narrative as a pre-read to a conversation (meeting) that you're going to schedule. Then, book a meeting (with an agenda please!) with your audience and attach your document with a polite request to read it before the meeting. When the meeting comes, pass out a printed copy and present its information (do not read it aloud) as you would a PowerPoint for the slackers (busy people who go to meetings for a living) who didn't read it beforehand. Next, turn the presentation into a conversation that you lead by asking for feedback and perspective. Finally, summarize the conversation in an email, reinforcing the discussion and any to-dos that people have, and distribute it to the attendees.</p>
<p>Do this within a few days of the conversation or <a href="https://en.wikipedia.org/wiki/Momentum">it will lose momentum</a>.</p>
<h3 id="dont-wear-leather-suits-at-work">Don't wear leather suits (at work...)</h3>
<p><strong>warning:</strong> intentionally ambiguous story ahead...</p>
<p>Years ago I interviewed someone for a position at a company that provided software services to a very conservative industry, and he had a fantastic story. Like most interviews, it started on the phone and after a few minutes I was hooked. He was engaging, charming, articulate and clearly intelligent without being arrogant, and so we brought him in for an in-person meeting.</p>
<p>He wore a leather suit. A suit... made of leather. I'll let your mind wander...</p>
<p>I don't really even know what to say. I mean, he looked good, our meeting went really well and he was just as charming and engaging as he was on the phone. But c'mon, how can anyone take you seriously in a job interview when you're sitting across the table, creaking away as you take a sip from your glass of water? Even I, morbidly amused, envisioned numerous better wardrobe choices he could have made.</p>
<p>He didn't get the job.</p>
<p>This guy, who had so many things going for him, lost the opportunity simply because he failed to recognize one simple and important fact about job interviews: you have absolutely no idea who your audience is. He failed to adjust his own need to look fashionable(ish) to the needs of his audience, who needed him to be credible.</p>
<p>I bring up this blast from the past to reinforce the importance of comportment and of knowing your audience that I mentioned earlier. Your outward appearance can undo all of your hard work. Sorry (not sorry) about the dose of reality here.</p>
<h3 id="the-influence-buffet">The influence buffet</h3>
<p>It's easy to think that after all this work you've done you're finished, and sometimes (rarely) you might be. More often than not you'll need to rinse and repeat this process many times, catching people who may be at different states of acceptance of your idea and nudging them in the direction you want. You'll have to be careful at this point to not become a pest or an annoyance for your audience, but honestly, sometimes it's unavoidable.</p>
<p>Just be aware of how your story is being received and work at it over time. This process could take days (awesome) or months and sometimes much, much longer.</p>
<p>My boss and I spent years telling the story that ultimately convinced our leaders to invest in the creation of our Digital team. We agonized over who to engage and when, which PowerPoint deck needed to convey what message, which business drivers to engage and where industry trends were pointing. We created a base operating model, budget, and organizational structure that could deliver on the promises we were making. And finally, we perpetually adjusted all of these things based on the feedback, constraints, and challenges we encountered.</p>
<p>None of this happened in a single meeting, but dozens upon dozens of formal and informal meetings, conversations, and more than a few lunches.</p>
<p>Many years later we are both asked by other business and technology leaders how we did what we did, and the answer is simple; we told a masterful story.</p>
<h1 id="final-thoughts">Final thoughts</h1>
<p>Storytelling is hard, the actually-hard kind of hard. It is as hard to learn to do well as it is to learn to become an engineer, a project manager, or an executive. This will be a personal journey for you, and you'll have to work on yourself in order to realize the full benefit of what I'm only scratching the surface of here.</p>
<p>There are a lot of books on this and its derivative topics for you to read. There are countless other sites online that can give you advice. Read everything you care to, but don't look for magic bullets and pixie dust; they don't exist. Instead, like all educational pursuits, absorb the information, learn from it, and make up your own mind.</p>
<p>Even better, find yourself a mentor.</p>
<p>You'll be better for it.</p>
<h1 id="deploy-wyam-to-azure-web-app"><a href="http://www.imtraum.com/blog/deploy-wyam-to-azure-web-app">Deploy Wyam to an Azure Web App</a></h1>
<p><em>2018-04-18</em></p>
<p>Now that I've got Wyam running on my workstation, I need to work out the deployment methodology for publishing content to my Azure Web App. My objective is to have a painless, largely automated process that allows me to focus on writing.</p>
<h1 id="why-web-apps">Why web apps?</h1>
<p>First, I think it's important to touch on why I'm hosting via an Azure Web App, especially when there are other options for hosting static sites that probably cost less. My reasoning is simpler than you might think; Web Apps are easy and I'm comfortable with them. I could go into all the benefits of a PaaS IIS model, but there are hundreds of articles out there on that topic.</p>
<p>Google is your friend and can find things for you...</p>
<p>I had seriously considered hosting the site via CDN / Azure Storage for a few pennies per month, but the Web App has one huge benefit for me; if needed I can run an MVC application alongside the content generated by Wyam. This would allow me the luxury of some element of dynamic functionality without the overhead of a full CMS.</p>
<p>I think a retro, 90's style page hit counter is in order.</p>
<h1 id="deployment-options">Deployment options</h1>
<p>Deployment to a Web App can be done a number of ways, and like anything you'll need to pick the one that makes the most sense for you. For the Wyam use case, I think these options are the most practical, but YMMV.</p>
<h2 id="visual-studio-publishing">Visual Studio Publishing</h2>
<p>For me, the least desirable option is to deploy using Visual Studio. You could load your generated content into a Visual Studio solution and use its publish functionality to push everything up to the Azure Web App.</p>
<p>Considering the other options available, I think this idea is dumb. Don't be dumb. Just. Don't.</p>
<p>Will it work? Yes.</p>
<h2 id="ftp">FTP</h2>
<p>Arguably the simplest and most ancient of our options, FTP is how us old-timers moved things around back in the 1200's. I'm pretty sure the Magna Carta was distributed via plain text on a public FTP server.</p>
<p>FTP is a completely viable option for our Wyam generated site. <a href="https://docs.microsoft.com/en-us/azure/app-service/app-service-deployment-credentials" title="web app deployment credentials">Create deployment credentials for your Web App</a>, look up your FTP info in the Azure portal and load them into your favorite FTP application to upload the content of your output directory to wwwroot on the server.</p>
<p>If you're feeling like a masochistic internet archaeologist, command line could be fun. Just upload the contents of Wyam's output directory to the wwwroot folder on the Web App and you're golden.</p>
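<p>For the command line curious, here's a minimal batch file sketch of that upload. It assumes you have curl available, and the FTP endpoint and credentials are placeholders that you'd pull from your Web App's publish profile in the Azure portal:</p>
<pre><code class="language-CMD">@echo off
setlocal enabledelayedexpansion

:: Placeholder values; get the real ones from your publish profile.
set "FTP_URL=ftp://waws-prod-xx-000.ftp.azurewebsites.windows.net/site/wwwroot"
set "FTP_USER=your-app-name\your-deploy-user"
set "FTP_PASS=your-deployment-password"

:: Upload everything under output\, preserving relative paths.
for /R "%CD%\output" %%F in (*) do (
    set "REL=%%F"
    set "REL=!REL:%CD%\output\=!"
    set "REL=!REL:\=/!"
    curl -s -T "%%F" --ftp-create-dirs -u "%FTP_USER%:%FTP_PASS%" "%FTP_URL%/!REL!"
)
</code></pre>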
<h2 id="git-iis-and-kudu">Git, IIS and Kudu</h2>
<p>This is the option I've decided to go with.</p>
<p>Each Web Application has an associated service site that runs Kudu as well as a series of other supporting site extensions. Kudu is a significant topic itself and could warrant many posts on what you can do with it. Those are already written by folks with more Kudu smarts than me. Again, do some Googling.</p>
<p>For our purposes, let's consider Kudu in the context of deploying code that exists in a local (on the server) Git repository. I'll walk through the workflow a bit later on, but essentially you'll leverage a Git repository on your workstation to manage your Wyam site (not Wyam itself) <a href="https://guides.github.com/introduction/flow" title="Learn about Gitflow">using whatever Git workflow you use</a>. When you're ready to deploy to your site, you'll push your local master branch up to Azure, watch some wheels spin, and in short order your updates will be live.</p>
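<p>To make that concrete, here's what the day-to-day loop looks like once everything below is set up. A sketch, assuming a remote named "azure" (which we'll configure shortly) and a hypothetical post:</p>
<pre><code class="language-CMD">> git add .
> git commit -m "New post: pandas are great"
> git push azure master
</code></pre>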
<h3 id="getting-code-to-the-server">Getting code to the server</h3>
<p>There are two ways to get your workstation's Git repo onto the server:</p>
<ol>
<li>Create a public or private repo on Github and push your repo to it, then configure a web hook to link your Web App and your Git repo</li>
<li>Push your code directly to the repo that sits alongside your site in Azure</li>
</ol>
<p>I won't get into the nitty gritty of setting this up, because again, it's documented everywhere. Just pick one of those options, and get either a <a href="https://blog.github.com/2015-09-15-automating-code-deployment-with-github-and-azure" title="automating code deployment with github and azure">web hook configured with your Github repo</a>, or <a href="https://docs.microsoft.com/en-us/azure/app-service/app-service-deployment-credentials" title="web app deployment credentials">set up some deployment credentials</a> and <a href="https://git-scm.com/book/en/v2/Git-Basics-Working-with-Remotes" title="working with remotes">configure a remote</a> on your workstation repo.</p>
<p>I went with option 2; push directly to the Azure repo.</p>
<p>For me, this was just simpler. Not that Github repos are hard by any stretch, but it felt like an unneeded middleman with a lot of functionality that I'm never going to need for my blog; It's just a blog.</p>
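<p>For reference, wiring up the remote for option 2 is a one-time command. This is a sketch with placeholder names; the actual Git clone URL for your app is shown in the Azure portal, and "azure" is just what I chose to call the remote:</p>
<pre><code class="language-CMD">> git remote add azure https://your-deploy-user@your-app-name-goes-here.scm.azurewebsites.net/your-app-name-goes-here.git
</code></pre>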
<h3 id="building-with-kudu-and-configuring-iis">Building with Kudu and Configuring IIS</h3>
<p>This is the part where things get fancy; we're going to modify the default Kudu deployment process. Don't worry, it's not hard or scary.</p>
<p>When your Azure repository gets updated, either by you pushing to it, or by the Github web hook doing whatever it does, the build process will start. It's awesome, but we need to do some work before Wyam will build anything.</p>
<ol>
<li><p>Upload a release of Wyam - Preferably a new one</p>
<p>Before we get started, here's the directory structure for an Azure Web App as it relates to our Wyam setup. Yes there are more directories in there, but we don't care about them right now, we only care about these three.</p>
<pre><code>> D:\home\site
> D:\home\site\repository
> D:\home\site\wwwroot
</code></pre>
<p>We need to get a compiled version of Wyam onto the server. For a Web App you can either FTP it up with whatever program you use or, if you are especially lazy, you can do it via the Kudu service site. Find it via the portal, or just cheat and enter this URL, making sure to change "your-app-name-goes-here" to the actual app name you configured in Azure.</p>
<pre><code>https://your-app-name-goes-here.scm.azurewebsites.net
</code></pre>
<p>Click on the Debug console in the header, and then CMD or PowerShell, whichever you are most comfortable with.</p>
<p>Just drag the Wyam.zip file onto the folder you want to upload to, and boom, it will upload and unzip for you. I dragged Wyam.zip to the 'site' directory, which gives me this directory structure where the Wyam directory is filled with the contents of the zip.</p>
<pre><code>> D:\home\site
> D:\home\site\repository
> D:\home\site\wwwroot
> D:\home\site\Wyam
</code></pre>
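<p>If you'd rather script the upload than drag and drop, Kudu also exposes a REST API that uploads and expands a zip for you. A minimal sketch using curl and your deployment credentials (the path after /api/zip/ is where the contents land):</p>
<pre><code class="language-CMD">> curl -T Wyam.zip -u your-deploy-user https://your-app-name-goes-here.scm.azurewebsites.net/api/zip/site/Wyam/
</code></pre>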
</li>
<li><p>Configure IIS</p>
<p>To host a Wyam generated static site in IIS, you'll need to configure it to support extensionless URLs and the specific MIME types used by Font Awesome. To do this, we're going to create a web.config file and put it in the "input" directory of your Wyam site.</p>
<p>Here's mine, tweak yours as needed (you shouldn't need to):</p>
<pre><code class="language-XML"><configuration>
<system.webServer>
<rewrite>
<rules>
<rule name="html">
<match url="(.*)" />
<conditions>
<add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
<add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
</conditions>
<action type="Rewrite" url="{R:1}.html" />
</rule>
</rules>
</rewrite>
<staticContent>
<remove fileExtension=".svg" />
<remove fileExtension=".eot" />
<remove fileExtension=".woff" />
<remove fileExtension=".woff2" />
<remove fileExtension=".rss" />
<mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
<mimeMap fileExtension=".eot" mimeType="application/vnd.ms-fontobject" />
<mimeMap fileExtension=".woff" mimeType="application/font-woff" />
<mimeMap fileExtension=".woff2" mimeType="application/font-woff2" />
<mimeMap fileExtension=".rss" mimeType="application/rss+xml" />
</staticContent>
</system.webServer>
</configuration>
</code></pre>
</li>
<li><p>Modify the Kudu build process</p>
<p>Now that we have a Wyam release on the server and a web.config that will set up IIS for serving static content, we need to tweak our Kudu deployment. All we need to do is add two text files to the root of our workstation repository.</p>
<ol>
<li>.deployment</li>
<li>deploy.cmd</li>
</ol>
<p>The <em>.deployment</em> file is a configuration file (old school INI) that lets you configure the deployment process. The command value is just a relative pointer to our command file. There are other options that will work here, but this is simple and I like simple.</p>
<pre><code class="language-INI">[config]
command = deploy.cmd
</code></pre>
<p>The <em>deploy.cmd</em> file will contain all of the command line instructions for building and deploying our site with the copy of Wyam we just uploaded.</p>
<p>It's been about 92 years since I wrote any Windows command line scripts that weren't PowerShell, so I'm sure someone can tell me all the reasons that this is crap. I based this on the default Kudu deployment command, so for now it works and I'm too lazy to pretty it up.</p>
<pre><code class="language-CMD">@if "%SCM_TRACE_LEVEL%" NEQ "4" @echo off
setlocal enabledelayedexpansion
IF NOT DEFINED WYAM_EXE (
SET WYAM_EXE=D:\home\site\Wyam\wyam.exe
)
IF NOT DEFINED WYAM_CONFIG (
SET WYAM_CONFIG=D:\home\site\repository\config.wyam
)
IF NOT DEFINED WYAM_OUTPUT (
SET WYAM_OUTPUT=D:\home\site\repository\output
)
IF NOT DEFINED WWWROOT (
SET WWWROOT=D:\home\site\wwwroot
)
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: Deployment
:: ----------
echo Handling Wyam Based Web Site deployment.
echo Generating the site and outputting it to the output directory.
call %WYAM_EXE% %WYAM_CONFIG%
echo Copying files to wwwroot.
xcopy %WYAM_OUTPUT% %WWWROOT% /S /Y
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
goto end
:end
endlocal
echo Finished successfully.
</code></pre>
<p>My recommendation here is to just take my two files, put them in the root of your repository and be done with it. If you want to get clever, play with it after you know things are working.</p>
<p>Now that you have those two files added, your workstation repository working directory should look something like this:</p>
<pre><code class="language-CMD">input
output
.deployment
.gitattributes
.gitignore
config.wyam
config.wyam.dll
config.wyam.hash
config.wyam.packages.xml
deploy.cmd
</code></pre>
<p>BTW, make sure your .gitignore looks like this:</p>
<pre><code>tools/
output/
config.wyam.packages.xml
config.wyam.hash
config.wyam.dll
</code></pre>
</li>
<li><p>Time to deploy</p>
<p>Commit it all to the master branch and <a href="https://www.youtube.com/watch?v=vCadcBR95oU">push it</a> to Azure. NOTE: When I named the remote for the Azure repo on my workstation repo, I called it "azure".</p>
<pre><code class="language-CMD">> git push azure master
</code></pre>
<p>As your repo is being pushed to Azure, you'll see the standard Git statuses print out in your shell. What's cool here is that once Wyam starts running, you'll see its output as well, which is awesome for remote troubleshooting.</p>
<p>Be aware that the first time Wyam runs, it's going to download all of its dependent NuGet packages just like it does on your workstation. It might take a few minutes, so be patient and let it do its thing; it will go much faster in subsequent deployments.</p>
<p>Once it's done and prints out the success message, open up a browser, visit your site, and marvel at how easy publishing your site is.</p>
</li>
<li><p>Lazy people extra credit</p>
<p>I believe <a href="https://www.hanselman.com/blog/DoTheyDeserveTheGiftOfYourKeystrokes.aspx" title="do they deserve the gift of your keystrokes">we only have a limited number of keystrokes in us</a>, and I don't want to type "git push azure master" every time I am ready to publish the site, so I made a Git alias:</p>
<pre><code class="language-CMD">> git config --global alias.deploytoazure push azure master
</code></pre>
<p>Now I can deploy the site by simply typing:</p>
<pre><code class="language-CMD">> git deploytoazure
</code></pre>
</li>
</ol>
<h1 id="final-thoughts">Final thoughts</h1>
<p>I'm really happy with how this turned out. I've got my Web App purring along, hosting my static site and I can focus on writing rather than running my blog. When I'm ready to publish content I only type in two words and watch the status messages flow. The site is simple and scalable and I don't worry about backups or plugins or updating or patching or any of the maintenance issues that you have with a more traditional blogging platform.</p>
<p>I'd love to hear what you think, drop me a comment below.</p>
<h1 id="blogging-with-wyam"><a href="http://www.imtraum.com/blog/blogging-with-wyam">Blogging with Wyam</a></h1>
<p><em>2018-04-15</em></p>
<p>This weekend I ported my blog from static content generated by Octopress to the static site generation platform, Wyam. After several years of ignoring the site built on the Jekyll derivative, I felt it was time to move to a generator based on technologies that allow me to focus on writing.</p>
<h1 id="the-wordpress-aggravation">The Wordpress Aggravation</h1>
<p>I started this blog with Wordpress back in 2006 as a way to share some nerdy things that I do on a daily basis at work. Like all sites, Wordpress ones have to be hosted somewhere. Yeah, I know it can be hosted at a hundred places for cheap, but that would mean a hard hit to my nerd ego. So, like so many others I built a LAMP server under my desk to host my site and quickly found that I didn't want to be a Wordpress admin.</p>
<p>But, I started to blog and like so many before and after me, found that out of the box, Wordpress needs a few plugins to be a solid platform. Unfortunately, I ran into constant problems on something that was supposed to be fun, and spent a lot of time battling formatting, compatibility issues and making sure everything was backed up.</p>
<p>The final nail in Wordpress' coffin came in the form of never-ending security issues, which made running the platform something I just didn't want to deal with anymore. I hated it, and so I didn't blog. All the hassle reinforced that I wanted to play World of Warcraft more than I wanted to feed my nerd ego.</p>
<p>BTW, notice that there isn't any content on here prior to 2008? That's because my backups worked, but in reality didn't. I learned a hard truth back then:</p>
<blockquote class="blockquote">
<p>"Backups always succeed. It's restores that fail." <em>- Scott Hanselman</em></p>
</blockquote>
<h1 id="onward-with-octopress">Onward with Octopress</h1>
<p>At some point I got tired of dealing with the infrastructure of a platform I wasn't even really blogging with, and decided to find something that would be simpler to host. A friend had turned me on to the concept of static site generators, and after some digging I found Octopress.</p>
<h2 id="static-sites-ftw">Static sites ftw</h2>
<p>Static sites are the antithesis of WordPress from a hosting standpoint; they aren't an application that needs services beyond just serving out simple HTML pages and other static assets. Because of this you can get away with a much simpler and far more secure hosting environment. Without an application and its supporting frameworks, other than the server itself, there is very little to hack.</p>
<p>Compared to most modern CMS platforms, static sites are primitive, harkening back to the earliest days of the interwebs.</p>
<p>They are awesome.</p>
<h2 id="my-problem-with-octopress">My problem with Octopress</h2>
<p>Octopress was bittersweet for me. On one hand it solved all of my infrastructure issues and let me move my blog to a much simpler hosting environment: Azure Web Apps. It was inexpensive to host, managed by Azure and it just worked. Now I could focus on writing rather than running my blog, and maintain some nerd cred.</p>
<p>With Octopress I could write my posts using markdown with a simple text editor. I'd not have to deal with the formatting problems I was plagued with in WordPress, and if I wanted I could blog with Notepad. Since I'm not a crazy person, I used Sublime Text and life was good.</p>
<p>Or was it?</p>
<p>Octopress is based on Jekyll and uses Ruby. This is fine; I don't participate in technology holy wars. But after using Octopress for a while I came to appreciate a hard reality: I'm fully invested in the Microsoft ecosystem, and the Ruby thing was going to take more time to get comfortable with than I was willing to put in. Octopress did some nice things for Jekyll, but the tech gap demotivated me from blogging for nearly 5 years.</p>
<h1 id="the-new-hotness">The new hotness</h1>
<p>I discovered Wyam sometime in mid-2017.</p>
<p>I wasn't actively writing (clearly), but I was considering what I should do with my blog. Should I kill it or keep it? At this point in my life I enjoy writing much more than I did when I was younger, so I looked for a replacement for Octopress; I found Wyam.</p>
<p>Wyam is appealing to me because it is based on .NET, which is the programming framework I've focused on since it came out forever-and-an-age ago. I've watched its development now for several months, and like its underlying design philosophy more than I did that of Octopress. It's simple where it needs to be simple, and complex where it needs to be complex. I like the pipeline architecture, the extensibility, the simple templating model. I like the themes and I like how absolutely simple it is to get up and running.</p>
<p>Only three things took actual time this weekend:</p>
<ol>
<li>Updating my markdown files to work well with Wyam</li>
<li>Deploying the site via Git and Kudu in my Azure Web App</li>
<li>Picking out which theme I wanted to go live with</li>
</ol>
<p>This site has been ported to Wyam, my latest static site generation blogging platform of choice. I am writing this post using markdown with Visual Studio Code, deploying it with Git and Kudu and hosting it with an Azure Web App.</p>
<hr />
<h1 id="git-tips-and-tricks"><a href="http://www.imtraum.com/blog/git-tips-and-tricks">Git Tips and Tricks</a></h1>
<p><em>2013-09-15</em></p>
<p>I've been using Git for quite some time and I've come to really enjoy it. It might be a little weird to say that I enjoy my source control system, but it makes development so much easier in so many ways. Now, I will admit that our relationship wasn't always awesome; working with Git can have a steep learning curve. I've invested a lot of time in learning to use Git, so I've put together a few helpful tips for you.</p>
<h1 id="change-your-text-editor">Change your Text Editor</h1>
<p>I love using Git from the command line, but it's not perfect. If you're developing in Windows, formatting your commit messages from the command line is about as enjoyable as a dead fish in the backseat of your car on a hot and humid summer day. Set Git up to use a graphical text editor and you'll be writing better commit messages in no time.</p>
<p>My text editor of choice is Sublime Text 2, though this approach will work for any text editor you might be using:</p>
<pre><code class="language-cmd">git config --global core.editor "'C:/Program Files/Sublime Text 2/sublime_text.exe' -w"
</code></pre>
<p>Or, if Notepad++ is more your speed:</p>
<pre><code class="language-cmd">git config --global core.editor "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -nosession -noPlugin"
</code></pre>
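<p>The same trick works for more modern editors, too. For Visual Studio Code, the --wait flag tells Git to block until you close the commit message tab:</p>
<pre><code class="language-cmd">git config --global core.editor "code --wait"
</code></pre>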
<p>You are <a href="http://who-t.blogspot.com/2009/12/on-commit-messages.html">writing good commit messages</a>, right?</p>
<h1 id="use-a-graphical-merge-tool">Use a Graphical Merge Tool</h1>
<p>Now that you have a graphical text editor, why not use a diff/merge tool that you'll be productive with? There are a lot of options here, and today I'm using DiffMerge.</p>
<p>When I set up my diff/merge tool I tend to just edit my .gitconfig file directly since it's faster for me. If you want to add all this crap in via individual commands in Git, that's cool, but my recommendation is to save some time and just edit this by hand.</p>
<pre><code class="language-INI">[diff]
tool = diffmerge
guitool = diffmerge
[difftool]
keepBackup = false
trustExitCode = true
[difftool "diffmerge"]
name = DiffMerge
path = \"C:/Program Files/SourceGear/Common/DiffMerge/sgdm.exe\"
cmd = sgdm --nosplash \"$LOCAL\" \"$REMOTE\"
[merge]
tool = diffmerge
guitool = diffmerge
[mergetool]
prompt = false
keepBackup = false
keepTemporaries = false
trustExitCode = true
[mergetool "diffmerge"]
name = DiffMerge
path = \"C:/Program Files/SourceGear/Common/DiffMerge/sgdm.exe\"
cmd = sgdm --nosplash --merge --result=\"$MERGED\" \"$LOCAL\" \"$BASE\" \"$REMOTE\"
</code></pre>
<p>The right Diff/Merge tool makes all the difference when working on a project where you can have conflicts that need addressing.</p>
<p>Take some time to play around with some of the different tools and set them up to work the way that makes sense for how you work. Today I'm using <a href="http://www.sourcegear.com/diffmerge/">DiffMerge</a> and in the past I've used <a href="http://winmerge.org/">WinMerge</a>; both are great. But, with that in mind, I've been taking a serious look at <a href="http://www.semanticmerge.com/">Semantic Merge</a> and it looks to be really promising.</p>
<h1 id="set-up-some-aliases">Set Up Some Aliases</h1>
<p><a href="http://www.imtraum.com/blog/git-aliases-will-set-you-free">I wrote about aliases last week</a>. Make your life easier by finding the most common commands you're using and aliasing them to something short and sweet.</p>
<h1 id="use-a-network-repository">Use a Network Repository</h1>
<p>Whenever I'm going to program something that is more than just a quick snippet, I create a Git repo and stuff my code into it. If it's something that I'm doing professionally or something I want to share with all of you, I push it up to either a public or private repository on <a href="http://www.github.com/">Github</a>. I like this approach because I get cloud backup of my repo and if something happens to my local one, eh, who cares. But not everything should go out into the wild, and for those projects I will work locally and create a bare repository on a network share.</p>
<p>If you think to yourself "why, this can't be a big deal, I'll just copy the repo from my machine to the share and start pushing to it" you'd be wrong! If you do, you'll get the following error message:</p>
<pre><code class="language-cmd">remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error:
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error:
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
</code></pre>
<p>This happens because when you're working on your repository on your machine, you're working in an initialized repository that has a working directory with the .git sub-directory that contains all your version controlled stuff. When you copy this to your network share, Git becomes a sad panda and won't let you push to it. Fortunately, getting this to work is trivial: simply create a "bare" repository, which in essence is a Git repository that doesn't have a working directory, only version control information.</p>
<p>To create the bare repository, just navigate to the folder where you want the repository to be created and execute:</p>
<pre><code class="language-cmd">git init --bare
</code></pre>
<p>Once you've done that, setting up the remote is super simple. In this example I'm creating a remote called "JoshNetworkShare" that maps to H:\Repositories\name-of-my-repository:</p>
<pre><code class="language-cmd">git remote add JoshNetworkShare file://H:\Repositories\name-of-my-repository
</code></pre>
<p>Substitute your own remote name and the path to the bare repository you created, and a whole new world opens up to you.</p>
<h1 id="use-an-established-workflow">Use an Established Workflow</h1>
<p>Git changes how you think about writing software and promotes using workflows to establish a consistent development and source code management process. You might think that the process you used in TFS or Subversion or Perforce or CVS (god help you...) will work, but you won't get the full benefit of Git if you go that route. Git isn't one of those other SCM's, so don't treat it like it is; get yourself a nice, simple Git workflow and move along.</p>
<p>Both personally and professionally I've followed <a href="http://scottchacon.com/2011/08/31/github-flow.html">Scott Chacon's workflow</a>. It works great for me on the large teams at work as well as my own one-man projects. There are others out there, but since I firmly believe in shipping working software often, the flexibility of this workflow scales really well. If you don't believe me, <a href="https://github.com/blog/1557-github-flow-in-the-browser">Github uses it</a> for well over 100 developers across all of their projects and it works well for them.</p>
<p>Regardless, find a workflow that you like and stick to it. The consistency will be invaluable.</p>
<h1 id="delete-unused-branches">Delete Unused Branches</h1>
<p>If you work on repositories that have a large number of fast and furious branches for things like support tickets, bug fixes and whatnot, you'll find you have a LOT of branches that have limited value once merged into master. You can clean up this clutter by running 'git branch --list' to get a list of all the branches, and individually running the 'git branch -d [branch name]' command for each branch you want to delete. Sound a bit sketchy? Yeah, it is; you can accidentally delete branches that have in-progress work in them rather than the one you wanted simply by being a touch careless.</p>
<p>Ask me how I know...</p>
<p>Fortunately there is an easier way to check which branches are safe to delete and automate their deletion.</p>
<pre><code class="language-cmd">git branch --merged
</code></pre>
<pre><code class="language-cmd">git branch --no-merged
</code></pre>
<pre><code class="language-cmd">git branch --merged | xargs git branch -d
</code></pre>
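<p>One caveat: the output of 'git branch --merged' includes your current branch, prefixed with an asterisk, so the piped version will stumble when it reaches it. A minimal sketch that filters it out first, assuming you're in Git Bash (or otherwise have grep and xargs on your path, which the command above already assumes):</p>
<pre><code class="language-cmd">git branch --merged | grep -v "\*" | xargs git branch -d
</code></pre>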
<hr />
<h1 id="git-aliases-will-set-you-free"><a href="http://www.imtraum.com/blog/git-aliases-will-set-you-free">Git Aliases Will Set You Free</a></h1>
<p><em>2013-09-05</em></p>
<p>Using Git from the command line can be both liberating and daunting. Once you appreciate the power that some of the more verbose commands give you, it is easy to get overwhelmed if you're not super comfortable on the command line. When you start digging into the formatting functionality of "log" you'll quickly find yourself looking for a better way to reuse several log formatting options.</p>
<p>For example, one of the more common log formats I use:</p>
<pre><code class="language-cmd">> git log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative
</code></pre>
<p>I'm pretty sure I found this somewhere on <a href="http://www.stackoverflow.com">Stack Overflow</a>, and while it makes reading the log nicer, who wants to type that in every time?</p>
<h1 id="oh-aliases-yer-the-best">Oh Aliases, Yer the Best!</h1>
<p>For the log example above, I configured an alias of "lg" for the command with all the formatting so I don't have to type it every time. Now, whenever I want that specific log format, I just type in "git lg" and blamo, a pretty log for me to look at.</p>
<p>Setting up an alias couldn't be easier. For example, to create an alias of "co" that aliases the checkout command you'd type the following into your shell:</p>
<pre><code class="language-cmd">git config alias.co checkout
</code></pre>
<p>And you have an alias; not terribly complicated. Using it is even simpler, just type in "git co" rather than "git checkout" and you're good to go.</p>
<p>There's a rub though; for long commands that have ticks and quotes and such, I find that I mess up the syntax creating aliases like this more often than not. That's irritating because I'm impatient and lazy and I don't like fighting to get this type of stuff to work.</p>
<p>So I cheat...</p>
<p>Whenever I need to edit my config file, I just edit the .gitconfig file directly in my C:\Users\yournamehere directory and, in the case of aliases, edit the [alias] section. Not hard, and personally I'd rather edit it this way. Powershell is sweet, but a text editor it is not.</p>
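<p>For example, the [alias] section of my .gitconfig looks something like this. The "lg" entry is the log format from above, and "co" is the checkout alias we just created; consider the "st" entry a sketch to adapt to your own habits:</p>
<pre><code class="language-INI">[alias]
	co = checkout
	st = status
	lg = log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative
</code></pre>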
<p>To get you started I've created <a href="https://gist.github.com/SirkleZero/6458110">a gist of a few commands that I use frequently</a>. Start with these, make your own, and search for other aliases to make your command line Git workflow just that much better.</p>
<hr />
<h1 id="streamline-git-with-powershell"><a href="http://www.imtraum.com/blog/streamline-git-with-powershell">Streamline Git with Powershell</a></h1>
<p><em>2013-04-19</em></p>
<p>You've <a href="http://git-scm.org">installed the latest version of Git</a>, read some tutorials and have been merrily branching, committing and having a jolly old time enjoying the freedom that Git gives you. At some point Git "clicked", and you picked up your laptop to show a co-worker how bad ass this new thing you did was; it was probably when you did your first reintegration merge without angering the gods.</p>
<p>Yet, with all the power that Git bestows, you're still missing out. Let's face it, Git is cool. But if you're using the default command line environment on Windows, it feels a bit... meh.</p>
<p>So what are we missing? Consider for a moment...</p>
<ul>
<li>Out of the box, you can use Git Bash or <a href="http://en.wikipedia.org/wiki/Command_Prompt">cmd.exe</a> (not great options imo)</li>
<li>You have no "real" ability to customize cmd.exe</li>
<li>Scripting is horrible (by today's standards)</li>
<li>Out of the box you can use Powershell, but without customization you're really only getting a nicer shell</li>
</ul>
<p>To get past the limitations of the default command line experience, we're going to set a few things up that make <a href="http://en.wikipedia.org/wiki/Windows_PowerShell">Powershell</a> shine when working with Git.</p>
<h1 id="install-git">Install Git</h1>
<p>If you already have Git installed and have been using it via the command line, you can likely skip this entire step. If not, and this is your first foray into using Git from the command line, take a moment to install Git so that it will play nice with Powershell. <a href="http://git-scm.com/download/win">Download the latest version of git for windows</a> and follow the installation process. As you go, make sure that you select "Run Git from the Windows Command Prompt" so your PATH is updated and Powershell knows what "git" means.</p>
<p>While you're at it, you'll also want to tell Git to use OpenSSH for SSH (more on this later).</p>
<p>For reference, this is what I selected when upgrading to the latest version of Git.</p>
<p><img src="http://cdn.imtraum.com/blog/images/install-git1.png" class="img-fluid" alt="Select the Git components to install" title="Select the Git components to install" />
<img src="http://cdn.imtraum.com/blog/images/install-git2.png" class="img-fluid" alt="Set your path to run Git from the Windows command prompt" title="Set your path to run Git from the Windows command prompt" />
<img src="http://cdn.imtraum.com/blog/images/install-git3.png" class="img-fluid" alt="Configure line endings to check out windows style, but commit unix style" title="Configure line endings to check out windows style, but commit unix style" />
<img src="http://cdn.imtraum.com/blog/images/install-git6.png" class="img-fluid" alt="Watch Git install... very exciting" title="Watch Git install... very exciting" /></p>
<p>Now that you have Git installed, let's install Posh-Git and Console2, and set up credential caching.</p>
<h1 id="install-posh-git">Install Posh-Git</h1>
<p>If you take a close look at the command prompt at the top of this post, you'll notice something interesting; an extended, dynamic command prompt that looks something like...</p>
<p><img src="http://cdn.imtraum.com/blog/images/posh-git-prompt.png" class="img-fluid" alt="Posh-Git command prompt with unstaged changes" title="Posh-Git command prompt with unstaged changes" /></p>
<p>In this example, the prompt shows us that we're on the master branch with no new files, one changed file, and no deleted files, and that the change is not staged for commit. If I add this change to the staging area (with git add), I get:</p>
<p><img src="http://cdn.imtraum.com/blog/images/posh-git-prompt-staged.png" class="img-fluid" alt="Posh-Git command prompt with staged changes" title="Posh-Git command prompt with staged changes" /></p>
<p>Notice that we have the same information being displayed; the name of the branch and the number of added, modified and deleted files. In this case, though, Posh-Git shows this information in green to indicate that these files are now in the staging area and ready to be committed.</p>
<p>The installation of Posh-Git couldn't be easier, and I highly recommend first <a href="http://psget.net/">installing PsGet</a> and using it to install the latest version of Posh-Git. Think of PsGet as a package manager for Powershell modules, and you'll have a good idea of what it can do for you.</p>
<p>To get started, open your Powershell environment and run the following command:</p>
<pre><code class="language-cmd">(new-object Net.WebClient).DownloadString("http://psget.net/GetPsGet.ps1") | iex
</code></pre>
<p>Once PsGet tells you that it's finished installing, you'll install the latest version of Posh-Git by executing this command:</p>
<pre><code class="language-cmd">Install-Module Posh-Git –force
</code></pre>
<p>Note that adding the <code>-force</code> argument to the <code>Install-Module</code> command tells PsGet to either install Posh-Git or update it to the latest version. As a matter of habit, I always add <code>-force</code> to update Posh-Git to the latest version.</p>
<p>If you'd like to check out how Posh-Git works, you can always perform a source code installation by forking or cloning the repository from Github and <a href="https://github.com/dahlbyk/posh-git">following the instructions to install Posh-Git</a>. Otherwise, stick with the PsGet install and you'll be good to go.</p>
<p>Now that you've got a bad ass custom Posh-Git powered Powershell prompt (10 times fast anyone?), let's round out the shell enhancements with Console2.</p>
<h1 id="install-console2">Install Console2</h1>
<p>Console2 is a nifty little program that wraps Powershell, cmd.exe or any other shell you may use in a nice, shiny, configurable package. I was introduced to it several years back by <a href="http://www.hanselman.com/">Scott Hanselman</a> and I have used it every day since. <a href="http://www.hanselman.com/blog/Console2ABetterWindowsCommandPrompt.aspx">Scott has an excellent writeup on how he configures Console2</a>, which strongly influenced the configuration that I use today. Follow his directions, and tweak as you see fit.</p>
<p>Now that our shell is in excellent shape, let's turn our attention to caching those pesky credentials.</p>
<h1 id="cache-your-ssh-keys">Cache Your SSH Keys</h1>
<p>After you installed Posh-Git and restarted your Powershell environment, you probably got a warning that looked something like:</p>
<p><img src="http://cdn.imtraum.com/blog/images/could-not-find-ssh-agent-warning.png" class="img-fluid" alt="Could not find ssh-agent" title="Could not find ssh-agent" /></p>
<p>The default profile for Posh-Git is trying to start ssh-agent.exe, but our shell environment doesn't know where to go to start it. To fix this, open the profile (mine is located at C:\Users\jgall\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1) with your favorite text editor, and add the following to the top of the file:</p>
<pre><code class="language-cmd">$env:path += ";" + (Get-Item "Env:ProgramFiles(x86)").Value + "\Git\bin"
</code></pre>
<p>When you're done, your profile should look very similar to this:</p>
<p><img src="http://cdn.imtraum.com/blog/images/add-ssh-agent-to-powershell-profile.png" class="img-fluid" alt="Git Powershell Profile" title="Git Powershell Profile" /></p>
<p>Line 1 adds ssh-agent's path to the Powershell PATH environment variable, and line 4 points to the location of the Posh-Git default profile. In my case, the machine that I created this screen grab from had Posh-Git installed from source, before I knew about the PsGet option (thanks <a href="http://www.haacked.com/">Phil Haack</a>).</p>
<p>Finally, you'll want to <a href="https://help.github.com/articles/generating-ssh-keys/">create an SSH key</a> and add it to your Github/Bitbucket/Whatever account.</p>
<p>Now that you've got ssh-agent running and an SSH key configured, close and re-open Powershell or execute the Powershell command <code>. $PROFILE</code> and you should get a prompt that looks something like this:</p>
<p><img src="http://cdn.imtraum.com/blog/images/ssh-agent-enter-passphrase.png" class="img-fluid" alt="ssh-agent - enter passphrase" title="ssh-agent - enter passphrase" /></p>
<h1 id="cache-your-https-credentials">Cache Your HTTPS Credentials</h1>
<p>If you're using HTTPS to communicate with your remotes, you're probably tired of entering your password all the time. <a href="http://gitcredentialstore.codeplex.com/releases/view/103679">Let's install credential helper</a> to cache your credentials so you only have to enter them once when you launch Powershell.</p>
<p>After you install this utility, send <a href="http://vibrantcode.com/">Andrew Nurse</a> a note and thank him for writing credential helper.</p>
<h1 id="write-some-scripts">Write Some Scripts</h1>
<p>Now that your Git Powershell environment is all pimped out, write (or find) some scripts to automate the more tedious Git things you find yourself doing. For example, I'm super paranoid about having the latest version of everything in my private corporate Github repositories on my local machine, so I wrote a simple script that will iterate the repositories for a given folder and fetch the latest source.</p>
<pre><code class="language-powershell">$global:GitAllSettings = New-Object PSObject -Property @{
FolderForegroundColor = [ConsoleColor]::Cyan
}
function git-all()
{
$s = $global:GitAllSettings
dir -r -i .git -fo | % {
pushd $_.fullname
cd ..
write-host -fore $s.FolderForegroundColor (get-location).Path
git-fetchall
popd
}
}
function git-fetchall()
{
$remotes = git remote
if($remotes){
$remotes | foreach {
Write-Host 'Fetching from' $_
git fetch $_
}
}else{
Write-Host 'No remotes for this repository'
}
git status
}
[System.Reflection.Assembly]::LoadWithPartialName("System.Diagnostics")
$sw = new-object system.diagnostics.stopwatch
$sw.Start()
git-all
$sw.Stop()
Write-Host "Completed in " $sw.Elapsed
</code></pre>
<p>Put this script somewhere on your machine and run it in the folder where your Git repositories live. While you're off getting fresh coffee, it will iterate each repository it finds and attempt to fetch from each remote you have configured. Sweet huh?</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/cracking-visual-source-safe-passwordsCracking Visual Source Safe Passwords2012-06-05T00:00:00Z<h1 id="vss-repository.really.really">VSS Repository... really?... REALLY?!</h1>
<p>Several months ago I was approached by a project manager asking for help with some truly legacy code; stuff that was so old it existed only in an ancient source safe database that hadn't seen the light of day in years. Unfortunately no one had (or remembered) credentials to get in, and this project update wasn't going away any time soon. We needed a solution to crack the database and get access to our client's code. Needless to say, this was one of the last projects I wanted to work on.</p>
<p>VSS is known to have a crappy security implementation, so I took it as a challenge to write my own password cracker based on some sparse interwebs info and my own penetration testing. Sure enough, the rumors of the lax security model implemented in VSS are true. This started out with a client need, and ended up turning into a fun little security project.</p>
<p>The goals that I set for myself were pretty simple:</p>
<ol>
<li>Leverage a simple command line interface</li>
<li>Crack the passwords for one or more users</li>
<li>Exploit VSS custom hash functionality</li>
<li>Export the cracked passwords to a text file</li>
</ol>
<h1 id="you.poor.bastard">You. Poor. Bastard.</h1>
<p>The now aptly named <a href="https://github.com/SirkleZero/you-poor-bastard">You Poor Bastard</a> is pretty straightforward. The following command, run via CMD/Powershell,</p>
<pre><code class="language-cmd">.\YouPoorBastard.exe -p "C:\SomeDataFolder\YourVSSDirectory" -e "C:\SomeDataFolder\Export.txt"
</code></pre>
<p>will iterate all the users in the database and export the resulting passwords to a tab-separated file.</p>
<p>You can also generate passwords for a single user and print them to the screen rather than a file. It's not the fanciest thing I've ever written, but when you need a tool like this, you really need it.</p>
<p>Drop on over to <a href="http://www.github.com">github</a> and <a href="https://github.com/SirkleZero/you-poor-bastard">check it out</a>! With any luck, you'll never need to actually use this utility.</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/setting-up-a-home-nasSetting up a Home NAS2011-07-05T00:00:00Z<p>There was a time when I was more than happy to sit inside on a warm summer weekend with some spare computer hardware and a healthy amount of nerdy ambition. Those days were filled with triumph, frustration and more than a little swearing as a finger would be neatly splayed open by the sharp metal edge of a computer case.</p>
<p>Then one day I woke up and realized I had a lot of computers. By “a lot”, I mean more than even a developer really needs. Enough that it was noticeable on my electric bill, my home office sounded like an airport, and it was perpetually 10 degrees warmer than any other room in my house. Anyone who has this many machines knows the sense of loss when power goes out long enough for the UPSes to die and all your little fans blow their last breath. The silence is deafening.</p>
<p>Shortly thereafter I decided I needed a home network that wasn't a part time job to maintain and set out to simplify things. First thing to go? The mail server; replaced by <a href="https://www.google.com/a/cpanel/domain/new">Google Apps</a> and my own domain. Next were the web and database servers; replaced by virtualization powered by <a href="http://www.vmware.com/products/workstation/">VMWare Workstation</a>. Now I’m down to a single computer and with the exception of the file server, my beloved <a href="http://www.alienware.com/">Alienware M15x</a> happily performs the duties of my old hardware, albeit in a much different way.</p>
<h1 id="dont-die-on-me">Don’t Die on Me!</h1>
<p>Several months go by. Actually more like 18 months, and I start to get paranoid about my lack of data redundancy. At this point I’m like most people; all my stuff is on the single hard-drive in my laptop and a second eSATA drive that I leave permanently connected. With the disturbing drive failure rates I’ve seen both personally and professionally, I felt it was time to prevent the suffering that would come from catastrophic drive loss and set up a more survivable data storage system.</p>
<p>Enter the NAS or “Network Attached Storage” Device and a real backup strategy for all of my data.</p>
<h1 id="pick-your-poison">Pick your Poison</h1>
<p>I spent a lot of time looking at the many storage options available to me. After a lot of research I considered the following:</p>
<ul>
<li>Custom built Linux based NAS</li>
<li>Microsoft's <a href="http://www.microsoft.com/windows/products/winfamily/windowshomeserver/default.mspx">Windows Home Server</a></li>
<li>Drobo's <a href="http://drobo.com/products/drobo-fs.php">Drobo FS</a></li>
<li>Synology's <a href="http://www.synology.com/us/products/DS1511+/index.php">DS1511+</a></li>
<li>NetGear’s <a href="http://www.netgear.com/home/products/storage/advanced-prosumer/RNDU6000.aspx">ReadyNas Ultra 6</a></li>
</ul>
<p>While I eventually decided to go with the Synology DS1511+, let me be fair; each of the options I considered would have met my needs. Like anything though, the devil is in the details, and while I’d love to provide a full run down of each device, its pros and cons, and who it might be good for, I’m just not that ambitious. Instead, it’s more valuable to give a review of the Synology DS1511+ and the features that made me choose it over the others.</p>
<h2 id="capacity-and-redundancy">Capacity and Redundancy</h2>
<p>Going into this project I had pretty substantial home storage requirements. I wanted to be able to store my critical data, backups of both my and my wife's laptops, virtual machines, as well as ISO’s of my MSDN subscription. Capacity planning, even for home use, is critical (IMHO) because of the investment in hardware. Underestimate and you’ll have a system that won’t hold all your data or won’t grow with you. Overestimate and you’re wasting money. Personally I’d rather spend more money up front and have greater future capacity, so I decided to err on the side of more storage than less. With photography as a new hobby, I think this is a wise decision.</p>
<p>The DS1511+ in this regard is awesome. Most of the systems I looked at will take 5 disks (6 for the Netgear) and support all the major RAID options that you’d need. Where the Synology device shines, however, is in its ability to expand by adding up to 2 more disk enclosures via the eSATA ports on the back of the device. Considering this configuration could support up to 45TB of total disk (which I will never use, but is cool nonetheless), I’ve got options should I run out of space.</p>
<p>Photography and HD video eats disk space… fast.</p>
<h2 id="user-interface">User Interface</h2>
<p>I am lazy when it comes to running machines at home. Gone are the days of farting around with hardware, figuring out how I want to set Linux or Windows up, and all those things I used to really get a kick out of. Now I’m much more prone to explore photography than the inner workings of infinite menus. So it’s no surprise that the easier a device is to use, the more I’m willing to embrace it. I’m not dumb; I’ll find the tricksy button setting thing and flip the switch. I just don’t want to have to dig for it.</p>
<p>So unlike my <a href="http://www.dlink.com/DIR-655">D-Link DIR-655 Router</a> which has one of the better web based administration tools I’ve seen on a consumer grade router, the Synology device is intuitive to use. Not to say that the router is hard, it’s just that there are at least 3 or 4 places that I can go to change different wireless settings, most of them two or three obscure clicks into a menu system that I can never remember how to get to the two times a year I log into the thing. Again, I’m lazy that way.</p>
<p>The Synology DSM (Disk Station Manager) is more like an operating system and less like a web site. When you log into the device with your browser, you essentially have the look and feel of a Linux’y GUI operating system in your browser. It’s easy to find what you want, you can drag and layer windows, and all those nice things that make using a modern graphical operating system easy to use. They put a lot of thought into how their system works, and it shows.</p>
<p>For more detailed information, check out some of the <a href="http://www.synology.com/us/products/features/user_interface.php">features and screenshots</a> on the Synology site.</p>
<h2 id="backing-up">Backing Up</h2>
<p>I've got four or five virtual machines, my host workstation, and my wife's workstation and I don't want to spend hours on end making sure things are crazy backed up. The backup strategy that I'm using is pretty straight forward, and works well with a home NAS.</p>
<ol>
<li>Virtual machines are copied from my workstation to the NAS device on a nightly basis using <a href="http://www.2brightsparks.com/syncback">SyncBackSE</a></li>
<li>My host workstation is backed up to a shared backup folder on the NAS using <a href="http://www.acronis.com/homecomputing/products/trueimage/">Acronis True Image</a></li>
<li>My wife's workstation is backed up to a shared backup folder on the NAS using <a href="http://www.acronis.com/homecomputing/products/trueimage/">Acronis True Image</a></li>
</ol>
<p>To back up the NAS there are two processes that I run:</p>
<ol>
<li>Every few hours the DSM built-in backup software backs up the NAS contents to a direct-attached external USB drive</li>
<li>Once per week I create an off-site backup that I keep at work by using <a href="http://www.2brightsparks.com/syncback">SyncBackSE</a> to copy the contents of the NAS to a compressed and <a href="http://www.truecrypt.org/">TrueCrypt</a> encrypted external USB drive attached to my workstation</li>
</ol>
<p>In an ideal world I'd just run TrueCrypt on the Synology and encrypt everything written to the USB drives. This would allow me to have only a single step for creating a backup that I can use with two drives for off-site backups. I know it's possible to compile TrueCrypt for the Synology, but I have yet to get it to work properly; IMO it's one hell of a convoluted process.</p>
<h1 id="summing-it-up">Summing it Up</h1>
<p>I've had the <a href="http://www.synology.com/us/products/DS1511+/index.php">Synology DS1511+</a> for some time now and it's one of the best improvements I've made to my home network. I am sure the other solutions I looked at are fine and good, but I am very happy with my choice of the Synology device.</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/paging-with-linqIEnumerable<T> Paging with LINQ2010-01-26T00:00:00Z<p>I am a big fan of LINQ to Objects, especially when it comes to working with collections.</p>
<p>There are a lot of samples that illustrate how to use Skip() and Take() to locate a page of data in a collection, however most examples leverage inline code. While I appreciate the simplicity of demonstrating code in this manner, I think it promotes bad programming practices. Rather than find a concept and integrate it intelligently into their application, most developers will plagiarize the sample and go on their merry way without a thought of how the code "should" be integrated.</p>
<p>One approach that I use (but by no means the only one) to integrate LINQ pagination into an application is an extension method. This is an elegant approach because it encapsulates the pagination logic in a method that extends the functionality of IEnumerable<T>, and exposes paging functionality to all IEnumerable<T> collections in your application.</p>
<p>The extension method:</p>
<pre><code class="language-C#">public static IEnumerable GetPage(this IEnumerable source, int page, int recordsPerPage, out double totalPages)
{
if (recordsPerPage >== 0)
{
throw new ArgumentOutOfRangeException("recordsPerPage", recordsPerPage, string.Format("recordsPerPage must have a value greater than zero. The value you provided was {0}", recordsPerPage));
}
// get the first record ordinal position
int skip = (page - 1) * recordsPerPage;
// get the records per page
var totalRecords = source.Count();
// get the total number of pages
var tp = totalRecords / (double)recordsPerPage;
totalPages = Math.Ceiling(tp);
return source.Skip(skip).Take(recordsPerPage);
}
</code></pre>
<p>Now, if you have a collection, perhaps something like...</p>
<pre><code class="language-C#">List names = new List();
names.Add("josh");
names.Add("penelope");
names.Add("linda");
names.Add("lauren");
names.Add("amy");
names.Add("cullen");
names.Add("kevin");
names.Add("john");
</code></pre>
<p>You can easily query the collection for a page of data like so...</p>
<pre><code class="language-C#">double totalPages;
var pageOfNames = names.GetPage(1, 3, out totalPages);
</code></pre>
<p>The extension method extends the functionality of the List<T> (names) with a method that will get a specific page of data from the collection; with the eight names above, asking for page 1 at 3 records per page returns "josh", "penelope" and "linda", and totalPages comes back as 3. While the LINQ query itself is simple, this is far more reusable than having paging code all over your app.</p>
<p>Enjoy!</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/jquery-fadein-fadeout-cleartypejQuery fadeIn / fadeOut vs. IE ClearType Rendering2010-01-21T00:00:00Z<p>jQuery makes fading html elements trivial, and every day I see JavaScript fade in/out effects used all over the web. I've used this UI trick on a few of the sites I've been working with lately and like everyone, I've experienced the frustrating jagged text issue in IE. You know what I'm talking about if you use Facebook and have posted a comment on someone's wall post. After you make the comment it will show up inline in typical AJAX fashion and its font will look like it was rendered on a Commodore 64. There are <a href="http://stackoverflow.com/questions/457929/jquery-toggle-function-rendering-weird-text-in-ie-losing-cleartype">a few posts like this one</a> that address the issue with jQuery and CSS modifications, however I've not had much luck using this method and it's not very flexible. Fortunately there is an alternative commonly used by flash developers that I've found to be easier to implement, more reliable and most importantly works seamlessly across all modern browsers.</p>
<p>Let's start with some content that we want to fade in and out. In this case we'll make a simple widget that displays a series of rotating banners to your site visitors.</p>
<p>First, we'll create two container div elements to hold our banners.</p>
<pre><code class="language-HTML"><div id="outerContainer" style="background:#F00;height:175px;width:215px;">
<div id="bannerContainer" style="position:relative;"></div>
</div>
</code></pre>
<p>You'll notice that I've set the bannerContainer position to "relative" and provided a fixed size (height and width) to the outerContainer div. Both will be very important later on.</p>
<p>Next, let's add the content that we want to fade in and out; in this case, the divs that will hold our banners.</p>
<pre><code class="language-HTML"><div id="outerContainer" style="background: #F00;height:175px;width:215px;">
<div id="bannerContainer" style="position:relative;">
<div id="banner1" style="display:none;">
<p>Hey, click here and buy our stuff!</p>
</div>
<div id="banner2" style="display:none;">
<p>Clicky Clicky!</p>
</div>
<div id="banner3" style="display:none;">
<p>Clicking here is good for you!</p>
</div>
</div>
</div>
</code></pre>
<p>Here I've set the banner div style display property to "none" to give a starting point where all the banners are hidden; we'll dynamically pick the first banner to display with a bit of jQuery code.</p>
<p>Finally, we're going to add the thing that will make all of this work. I call it, "the hat".</p>
<pre><code class="language-HTML"><div id="outerContainer" style="background: #F00;height:175px;width:215px;">
<div id="bannerContainer" style="position:relative;">
<div id="banner1" style="display:none;">
<p>Hey, click here and buy our stuff!</p>
</div>
<div id="banner2" style="display:none;">
<p>Clicky Clicky!</p>
</div>
<div id="banner3" style="display:none;">
<p>Clicking here is good for you!</p>
</div>
<div id="bannerHat" style="position:absolute;background:#00F;top:0;left:0;height:175px;width:215px;display:none;"></div>
</div>
</div>
</code></pre>
<p>bannerHat is a child of bannerContainer whose style is set to position: absolute, top: 0, left: 0, and display: none. Additionally, the bannerHat has the same display style (size, background, etc.) as outerContainer. This setup allows the bannerHat to be used to fade the banners without encountering the IE text fading issues.</p>
<p>The JavaScript that makes this work is very straightforward, but is a bit "backward" compared to the process of fading an item without the hat method. The process looks like this (pseudo code):</p>
<ol>
<li>(First time only) jQuery.show() one of the banners, it doesn't matter which one.</li>
<li>After a period of time, jQuery.fadeIn() the bannerHat div. This will hide the banner with a nice fading out effect.</li>
<li>jQuery.hide() the banner that is still displayed but now covered by the hat.</li>
<li>jQuery.show() one of the other banners.</li>
<li>jQuery.fadeOut() the bannerHat div. This will reveal the banner with a nice fading in effect.</li>
<li>Wash, rinse, repeat.</li>
</ol>
<p>So, let's look at the JavaScript that will give us a random, rotating banner widget.</p>
<pre><code class="language-javascript">$(document).ready(function() {
var options = {
startIndex: 0,
fadeInDuration: 1000,
fadeOutDuration: 1000,
rotationInterval: 10000,
startDelay: 0
};
var rotator = function(containerSelector, hatSelector, options) {
var children = $(containerSelector).children();
var startIndex = options.startIndex;
var totalItems = children.length - 1; // this is so that we exclude the hat (the -1 part)
$(children[startIndex]).show();
setTimeout(function() {
setInterval(function() {
$(hatSelector).fadeIn(options.fadeOutDuration, function() {
$(children[startIndex]).hide(function() {
// determine the actual start index by adding one, then taking the modulus
// this will give us the correct start index regardless of the number
// of items we have to work with.
startIndex = (startIndex + 1) % totalItems;
$(children[startIndex]).show(function() {
$(hatSelector).fadeOut(options.fadeInDuration);
});
});
});
}, options.rotationInterval);
}, options.startDelay);
};
// randomly pick a start index and then initiate the rotation
options.startIndex = Math.floor(Math.random() * 3);
rotator('#bannerContainer', '#bannerHat', options);
});
</code></pre>
<p>The JavaScript is pretty self-explanatory and shows how to use the hat to fade the banners in and out without directly fading the banners themselves. This circumvents the issue with ClearType fonts when fading text in IE, works great in all the major browsers, and should be easy to adapt to any situation where you need to fade textual elements with JavaScript.</p>
<p>So now there shouldn't be any excuses for terrible fading in IE (yeah, I'm lookin at you Facebook...).</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/alienware-m15x-laptopAlienware m15x Laptop2010-01-18T00:00:00Z<p>I ordered a new laptop from Alienware last Monday and found over the weekend that I'll be getting it over a week early! I'm really excited for it to arrive since it will replace my extremely aged Dell Inspiron 8600 and decouple me from my home built gaming workstation, and it will be nice to have a single machine for gaming and programming.</p>
<p>My Alienware M15x Configuration</p>
<ul>
<li>Intel Core i7 720QM 1.6GHz (2.8 GHz Turbo Mode, 8MB Cache)</li>
<li>4GB Dual Channel DDR3 at 1333MHz 2 x 2048MB</li>
<li>15.6-inch Wide FHD 1920x1080 (1080p) WLED</li>
<li>1GB NVIDIA GeForce GTX 260M</li>
<li>500GB SATAII 7,200RPM</li>
<li>Genuine Windows 7 Ultimate, 64bit, English</li>
<li>Primary - 6-cell (56Watt) Lithium-Ion Battery</li>
</ul>
<p>I'm not a believer in cutting edge hardware, so I usually spec my machines with the highest end componentry at the point where the price starts to dramatically increase, but isn't out of hand. I find that you get the best bang for your buck when buying your hardware with this in mind. I could have spec'd the machine with the highest end stuff available, and it would have cost around $3,600 without any extended warranty or accidental coverage. Instead my laptop came in at $2,100 for the hardware configuration, and around $2,600 when you factor in the 3 year extended warranty with accidental damage coverage. Since I am somehow prone to having other people wreck my stuff, I figure I can't go wrong with the accidental coverage...</p>
<p>The processor is a good example of getting good bang for the buck. This was only a $100.00 upgrade from the stock processor, while the next one up cost $400.00 and only gave you a nominal core speed increase. Considering the applications that I run are not CPU bound, this was a choice that saved quite a bit of money and still gave me a fast quad core processor that can be upgraded at a later date if I find the machine lacking in horsepower.</p>
<p>I decided on 4GB of memory rather than my initial instinct to max the box out at 8GB. While more RAM would be great, I generally don't find myself running out of memory. When I do, the machine will take 8GB and upgrading should be fairly inexpensive in a year or two. Going with 4GB saved me $300.00 over the 8GB upgrade.</p>
<p>Finally, I elected to stay with the stock 6 cell battery rather than upgrade to the 9 cell version. After reading reviews I found that the 9 cell only gave about 90 minutes of battery life; a modest 30 minute increase over the stock 6 cell battery. For a $100.00 upgrade, this just didn't seem like that great of a deal considering this laptop will rarely find itself in a Starbucks or any other scenario where I don't have access to power. I'll plug the box in when I'm in my living room; it will be faster that way anyway.</p>
<p>I'll do a more detailed write up after I get this thing set up and get some time to poke around. I can't wait!</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/you-think-your-data-is-safeYou Think Your Data is Safe?2009-11-03T00:00:00Z<p>While working on some application security requirements for a client, I came across this little nugget about cracking pgp passwords using a cloud.</p>
<p><a href="http://news.electricalchemy.net/2009/10/cracking-passwords-in-cloud.html">Cracking Passwords in the Cloud</a></p>
<p>It's interesting to see how easy it has become to brute force passwords using distributed computing. While brute forcing the average person's passwords is still time-prohibitive, even with a cloud, the ability to reduce password cracking times from years to weeks is impressive. Cracking performance will continue to increase as distributed computing becomes cheaper, faster and more widely adopted.</p>
<p>Take a look at the article and think of how this could affect your data: <a href="http://news.electricalchemy.net/2009/10/cracking-passwords-in-cloud.html">Cracking Passwords in the Cloud</a></p>
<p>If you haven't started to really focus on application data security, perhaps you should...</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/handling-unhandled-aspnet-exceptionsHandling Unhandled ASP.NET Exceptions2008-03-17T00:00:00Z<p>At some point in their career everyone who creates ASP.NET applications has had issues with their site throwing exceptions that aren't trapped. We end up displaying a friendly error page at best, and at worst display the yellow ASP.NET error screen of death. Generally speaking it's best practice to configure your ASP.NET application to use friendly error pages so that your users aren't presented with an exception and stack trace that are meaningless to them.</p>
<p>But how do you find out what exceptions are being thrown by users who are not you? Over the years I've created many versions of the same type of code to handle these situations, so last night I created a project on codeplex that I'm calling <a href="http://www.codeplex.com/sigh">sigh.net</a>. Essentially, <a href="http://www.codeplex.com/sigh">sigh.net</a> is a provider based unhandled exception handler for ASP.NET applications. You can download the source at <a href="http://www.codeplex.com/sigh">http://www.codeplex.com/sigh</a>.</p>
<p>It's extremely simple to use and doesn't require you to change or add any code to your application. I currently have an email provider created and am in the process of creating a SQL database provider.</p>
<h1 id="update">Update</h1>
<p>It seems that codeplex killed this project. It might have something to do with the fact that I'd not updated anything with it in ages, or maybe I missed an email telling me to do something. Regardless, this project, at least for now, is dead.</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/systemnullable-vs-tryparse-revisitedSystem.Nullable vs. TryParse Revisited2008-03-01T00:00:00Z<p>As a follow-up to my <a href="/blog/systemnullable-vs-tryparse">system.nullable vs. tryparse</a> post, I've posted the source code for the nullable parser. I use this code in nearly all of my projects to help with parsing nullable values. It makes parsing nullable objects much easier, and mirrors the functionality of the existing TryParse methods that the framework has made us accustomed to.</p>
<p><a href="http://cdn.imtraum.com/blog/code/NullableParser.txt">Download the C# file</a></p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/writing-an-installer-class-for-a-visual-studionet-addinWriting an Installer Class for a Visual Studio Addin2008-02-22T00:00:00Z<p>You've struggled through understanding commands and toolbars, pulled your hair out deciphering confusing API's, and scarred your neighborhood with red-faced screams of frustration as you debug unhandled exceptions that crash visual studio. After weeks, months or yes even years polishing an addin you still aren't finished until you create an installer program.</p>
<p>The installer projects available in visual studio make creating an installer a trivial affair; however there are a few things to consider when making an installer for an addin project.</p>
<h1 id="the.addin-file">The .addin file</h1>
<p>When you create a new addin project in visual studio 2005 or 2008 you will have an xml file in the root directory of your application named YourProjectName.addin. Opening this file will show you the base structure of the xml that among other things defines the location of your addin assembly. The trick of installing the addin via an installer project is placing this file in an appropriate location on the user's workstation as well as modifying the xml to point to the location where your addin is installed.</p>
<p>Addins for visual studio 2005 and 2008 are installed by placing the .addin file in one of several locations on a developer's workstation. These locations are managed through visual studio under the Tools > Options > Environment > Add-in/Macro Security options page. Here you will find a list of paths that visual studio will investigate when it loads to locate various addins that you may have installed.</p>
<p>I caution you on modifying this list; most addins that are written will be installed in one of the default paths specified here. If you change these default paths you could easily lose access to any addins that you already have installed.</p>
<h1 id="where-to-put-your.addin-file">Where to put your .addin file</h1>
<p>For my addin, <a href="http://www.dotnetsavant.com">.netSavant</a>, I use the following line of code during my installation process to generate a file path that the .addin file will be saved to:</p>
<pre><code class="language-C#">Path.Combine(Environment.GetEnvironmentVariable("ALLUSERSPROFILE"), @"Application Data\Microsoft\MSEnvShared\Addins\DotNetSavant.AddIn");
</code></pre>
<p>This will generate a string value that looks like</p>
<pre><code class="language-cmd">> C:\Documents and Settings\All Users\Application Data\Microsoft\MSEnvShared\Addins\DotNetSavant.Addin
</code></pre>
<p>(on my laptop). Again, there are other locations that you could save the file to, though I've found this one to be the most reliable as far as its existence on developer workstations and its being configured in visual studio (it's included in vs.net by default).</p>
<p>One complexity to think of; If your user is installing on a non-english version of windows, I'm honestly not sure how to best compose the file save path. I suppose I'll have to deal with this eventuality eventually. Until then I'm going to live in my quiet and warm happy place.</p>
<h1 id="creating-the-installer-class">Creating the Installer Class</h1>
<p>As with any installer project, if you want to perform some kind of custom installation action you'll need to create an Installer class somewhere in your application. This is as simple as creating a new class file and having it inherit from System.Configuration.Install.Installer. You'll need to decorate your new class with the RunInstaller(true) attribute and override the Install, Uninstall, and Rollback methods. These methods will be used to install your .addin file, remove your .addin file when your addin is uninstalled, and roll back any installation tasks that you've performed if there is an error during the installation process.</p>
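<p>Stripped of the .addin specific logic (which we'll get to next), the shell of the class looks something like this; the class name is mine, but the override signatures are the standard ones:</p>
<pre><code class="language-C#">using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;

[RunInstaller(true)]
public class AddinInstaller : Installer
{
    public override void Install(IDictionary stateSaver)
    {
        base.Install(stateSaver);
        // write the .addin file here (see the Install section below)
    }

    public override void Uninstall(IDictionary savedState)
    {
        base.Uninstall(savedState);
        // delete the .addin file here
    }

    public override void Rollback(IDictionary savedState)
    {
        base.Rollback(savedState);
        // undo anything Install managed to do before the failure
    }
}
</code></pre>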
<h1 id="overriding-install">Overriding Install</h1>
<p>The most complex method of the installation process, this is where you'll create your .addin file and save it to the path that I've mentioned above.</p>
<p>For <a href="http://www.dotnetsavant.com">.netSavant</a> I store my .addin file in the root directory of my project and set its build action to "Embedded Resource". This way I can make simple adjustments to the file and it will always be available as a resource of the assembly. This comes in handy during the installation process, specifically the Install method because I can access the embedded resource, modify the path defined in the Assembly nodes and write the xml contents to disk.</p>
<p>Since the release of vs.net 2008 I've updated my own installer to use a block of LINQ that looks something like this:</p>
<pre><code class="language-C#">XDocument linqXmlDocument = null;
string installationPath = base.Context.Parameters["AssemblyPath"];
string addinResourceFile = Assembly.GetExecutingAssembly().GetName().Name + ".DotNetSavant.addin";
using (Stream resourceStream = Assembly.GetExecutingAssembly().GetManifestResourceStream(addinResourceFile)) {
linqXmlDocument = XDocument.Load(XmlReader.Create(resourceStream));
}
var query = from assemblyNode in linqXmlDocument.Descendants()
where assemblyNode.Name.LocalName.ToLower().Equals("assembly")
select assemblyNode;
foreach (XElement assemblyNode in query) {
assemblyNode.SetValue(installationPath);
}
string addinXml = linqXmlDocument.ToString();
</code></pre>
<p>Note the use of base.Context.Parameters["assemblypath"]. The base installer object exposes these values so that you can access information collected during the installation process. In this case I'm accessing the path that the user specified as the installation directory for the addin.</p>
<h1 id="overriding-uninstall">Overriding Uninstall</h1>
<p>When uninstalling your addin, it would be prudent to delete the .addin file from the user's file system. It's just common sense that when you uninstall something you should remove everything that was added to the user's computer during the installation process.</p>
<h1 id="overriding-rollback">Overriding Rollback</h1>
<p>The rollback method is where you will implement any clean up code that will be run if the installation process needs to terminate before its completion. Be aware that you'll not know at what point in the installation process this method might be invoked by your installer package, so any cleanup that you perform should be wrapped inside of try blocks so that the rollback method can execute cleanly.</p>
<h1 id="back-to-the-installer-project">Back to the Installer Project</h1>
<p>Now that you have an installer class it's a simple matter to wire it up to your installer. Make sure that your installer references either the build output of your project or the compiled assembly of your project directly. Then, right click on the installer project and select View > Custom Actions. You'll see four folder icons labeled Install, Commit, Rollback and Uninstall. Simply right click on a folder, click "add custom action" and browse to the assembly that contains your installer class. Do this once each for the Install, Rollback and Uninstall folders to end up with a configuration that looks something like this:</p>
<p><img src="http://cdn.imtraum.com/blog/images/installer-customactions.png" class="img-fluid" alt="Installer - Custom Actions" title="Installer - Custom Actions" /></p>
<p>At this point the installer project is configured to use the custom installation process defined by the installer class. When the installer is run the .addin file will be successfully written to a legitimate addin path as part of the installation process. Conversely it will be removed from the file system during uninstallation.</p>
<h1 id="potential-issues">Potential Issues</h1>
<p>I hinted earlier that non-english users might have issues installing the addin using my methodology. This is because I hard-code part of the .addin installation path in English during my install process. It's impossible to guarantee that the paths specified by default in visual studio will always be in English, so we have a potential localization problem to deal with. Remember that if you don't see your addin in the addin manager in visual studio, you most likely saved your .addin file to a path that is not recognized by the development environment.</p>
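<p>One way to at least avoid hard-coding the "All Users\Application Data" portion of that path is to let the framework resolve the folder instead of building it from the ALLUSERSPROFILE environment variable. To be clear, this is a sketch of the substitution, not what my installer currently does:</p>
<pre><code class="language-C#">// Environment.GetFolderPath resolves the machine-wide application data folder
// correctly regardless of the Windows display language
string addinFilePath = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
    @"Microsoft\MSEnvShared\Addins\DotNetSavant.AddIn");
</code></pre>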
<p>Another common issue that is easy to overlook; make sure that you've specified a valid path to your assembly in your .addin files' Assembly node. If this path is incorrect visual studio will not load your addin.</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/visual-studio-options-pagesVisual Studio Options Pages2008-02-15T00:00:00Z<p>Like many addin developers I create options pages in visual studio to handle configuration of my software. While extremely simple to create these pages, it is not obvious how they should be configured to load with visual studio. Additionally, the documentation for creating options pages provided by Microsoft describes in detail how to create options pages, though neglects to describe how to make visual studio recognize your options page control.</p>
<p>Fortunately this is extremely simple to wire up.</p>
<pre><code class="language-xml"><?xml version="1.0" encoding="utf-16" standalone="no"?>
<extensibility xmlns="<a href="http://schemas.microsoft.com/AutomationExtensibility" mce_href="http://schemas.microsoft.com/AutomationExtensibility">http://schemas.microsoft.com/AutomationExtensibility</a>">
<hostApplication>
<name>Microsoft Visual Studio</name>
<version>9.0</version>
</hostApplication>
<hostApplication>
<name>Microsoft Visual Studio</name>
<version>8.0</version>
</hostApplication>
<addin>
<friendlyName>.netSavant</friendlyName>
<description>.netSavant is a powerful code generator for the .net framework and Visual Studio.NET. For more information please visit <a href="http://www.dotnetsavant.com/" mce_href="http://www.dotnetsavant.com/">http://www.dotnetsavant.com</a> </description>
<aboutBoxDetails>For more information about .netSavant please visit <a href="http://www.dotnetsavant.com/" mce_href="http://www.dotnetsavant.com/">http://www.dotnetsavant.com</a> </aboutBoxDetails>
<aboutIconData></aboutIconData>
<assembly>Path to .netSavant assembly</assembly>
<fullClassName>DotNetSavant.Connect</fullClassName>
<loadBehavior>1</loadBehavior>
<commandPreload>1</commandPreload>
<commandLineSafe>0</commandLineSafe>
</addin>
<toolsOptionsPage>
<category Name="DotNetSavant">
<subCategory Name="Adapter Generation">
<assembly>Path to .netSavant assembly</assembly>
<fullClassName>DotNetSavant.Controls.AdapterSettingsOptionsPage</fullClassName>
</subCategory>
</category>
</toolsOptionsPage>
</extensibility>
</code></pre>
<p>You'll notice that I added a ToolsOptionsPage node under the root Extensibility node of a standard .addin xml file. Here you can define hierarchical categories that will be displayed as visual studio options pages. In this example I've added a top level options page called "DotNetSavant" with a sub category named "Adapter Generation". Similar to the Addin node of the base .addin file, both Category and SubCategory nodes can specify a single Assembly and FullClassName node. Set the Assembly node's inner text to the path of the assembly where your options page control exists, then specify the fully qualified name of your options page class in the FullClassName node, and your options pages will load with visual studio.</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/unhandled-addin-exceptions-vs-visual-studionetUnhandled Addin Exceptions vs. Visual Studio2008-02-09T00:00:00Z<p>One of the most frustrating things I've found when programming addins for visual studio is the inability to globally trap unhandled exceptions the way that you can when authoring a windows application. Essentially visual studio intercepts exceptions your addin throws that you neglect to handle. The worst part is that you don't get any information about the exception before visual studio crashes! It's great that Microsoft gets a dump of the crash, but you're left standing empty handed and scratching your head.</p>
<p>Generally speaking an addin framework should not allow an addin to crash the host application. It points to either a design concept that I can't comprehend, a significant oversight in the design of the visual studio addin framework, or simply some kind of interop issue that visual studio doesn't handle well.</p>
<p>Short of making sure that every block of code in your addin that can throw an exception is wrapped in a try block, we don't have a good methodology for dealing with unhandled exceptions. While this is a good thing in that it forces you to write solid code, it can be exceptionally frustrating to debug unforeseen issues in a production environment.</p>
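<p>In practice, "wrap everything" means funneling your addin's entry points through something like the helper below. To be clear, this is just the defensive pattern I'm describing, not a fix for the underlying problem:</p>
<pre><code class="language-C#">using System;

public static class Guard
{
    // run an addin entry point, trap anything it throws, and record it
    // somewhere you can actually read later (file, event log, email, ...)
    public static void SafeExecute(Action action)
    {
        try
        {
            action();
        }
        catch (Exception ex)
        {
            // swap in whatever instrumentation you prefer
            System.Diagnostics.Trace.WriteLine(ex.ToString());
        }
    }
}
</code></pre>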
<p>I'm still looking for a good way to trap or instrument unhandled exceptions in addins that I write, especially in runtime environments. If I find a good approach I'll be sure to write about it.</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/visual-studionet-2008-extension-methodsVisual Studio 2008 Extension Methods2008-01-08T00:00:00Z<p>Now that visual studio.net 2008 has been released developers have a much improved development environment and framework to produce high quality code with. Extension methods are one of the new framework and IDE features that provides a powerful and clever method of extending objects that you do not have source code for or otherwise can't directly extend.</p>
<p>Simply put, extension methods allow you to add new methods to the public contract of an existing type without sub-classing, decorating or recompiling the original type. Prior to this release there were a few options available to solve this problem.</p>
<h1 id="the-decorator-pattern">The Decorator Pattern</h1>
<p>The decorator is a great design pattern for extending the functionality of an object without sub-classing. Without going into the nitty gritty details of decorator implementation, essentially you create an object that takes the object you want to decorate as an argument to your new object's constructor. Next, you create whatever methods you need in your object, accessing the "decorated" object as needed.</p>
<p>Using this pattern, you can extend the functionality of an existing object without subclassing or resorting to libraries of static methods. Unfortunately this pattern is overkill for minor object extensions or use with value types. I commonly use this pattern to create read-only versions of objects without having to change the functionality of the underlying object. Other great examples are the various stream objects in the System.IO namespace.</p>
<h1 id="subclassing">Subclassing</h1>
<p>This option is self-explanatory. Simply create an object that inherits from the object you want to extend and add additional functionality. Unfortunately, if the object you want to extend is sealed, you'll need to use a decorator or the dreaded "helper".</p>
<h1 id="the-helper-object">The "Helper" Object</h1>
<p>Nearly every piece of software that I've ever supported has one or fifty of these objects. I've seen string helpers, integer helpers, business object helpers, User Interface helpers, and ADO.NET helpers to name just a few.</p>
<p>Personally I don't care for this technique, though it can be extremely effective for minor object extension or use with value types. It is susceptible to abuse through massive overloading, lack of commenting and documentation and poor overall design.</p>
<h1 id="enter-the-extension-method">Enter the Extension Method</h1>
<p>Essentially, these methods are similar to the static methods used in the helper object with one very important distinction; they exist as methods defined as part of the contract of the defined type.</p>
<pre><code class="language-C#">public static class Extensions {
public static bool InRange(this T value, T lower, T upper) where T : struct, IComparable {
if(value.CompareTo(lower) >= 0 && value.CompareTo(upper) <= 0) {
return true;
}
return false;
}
}
</code></pre>
<p>In this example I've illustrated a very powerful generic method that allows a developer to see if a value falls into a range of acceptable values. Because it is generic and constrained by the IComparable<T> interface, we have the ability to perform a comparison using the CompareTo(T) method exposed by the interface. Syntactically, this is nearly identical to a standard static method with the exception of the <em>this</em> keyword that decorates the first argument of the method. The <em>this</em> keyword tells the compiler that this method should be added to structs of type T (the generic type identifier). Using this new method is simple.</p>
<pre><code class="language-C#">decimal price = 12.99;
if(price.InRange(1, 49.95)){
Console.WriteLine("The price specified is within the specified range.");
}else{
Console.WriteLine("The price specified is not within the specified range.");
}
</code></pre>
<p>This particular extension method is configured using the decimal type, though it will work for any struct that implements the IComparable<T> interface. What's important is that visual studio.net 2008 will display this method using intellisense on objects that match both the <em>this</em> (generic) declaration as well as the interface constraint. There are countless applications for extension methods. They provide a more intuitive method of extending objects without resorting to other techniques that might require extensive design consideration.</p>
<hr />
Joshua Gallhttp://www.imtraum.com/blog/nullable-types-and-adonet-parametersNullable Types and ADO.NET Parameters2008-01-05T00:00:00Z<p>As most people are aware the .NET 2.0 framework supports nullable value types. There are many articles on this topic and a few that address the issues of using nullable types in combination with your ado.net code. However, most of these discuss the issue of using nullable types in combination with the DbDataReader objects, though few address the conflicts that arise when using a nullable type to set or get an ado.net parameter value.</p>
<p>Prior to .net 2.0 you'd run into this issue when attempting to pass a null string to the value of an input parameter. In this case most of us would have written conditional code that looked something like this:</p>
<pre><code class="language-C#">string firstName = null;
if(firstName == null) {
Command.Parameters["FirstName"].Value = DBNull.Value;
} else {
Command.Parameters["FirstName"].Value = firstName;
}
</code></pre>
<p>Or you could have used a ternary operation: </p>
<pre><code class="language-C#">string firstName = null;
Command.Parameters["FirstName"].Value = firstName == null ? (object)DBNull.Value : firstName;
</code></pre>
<p>With the inception of nullable types you now have to perform the same logic for all parameters that will use a nullable type to set their values. This isn't a huge deal really, though it will start to get a bit tedious with stored procedures that have a lot of parameters to assign, and downright egregious over the lifecycle of your project. Fortunately there is a little known (and used) operator that we can use to solve this issue cleanly; the null coalescing operator "??". This operator specifies that if the argument on the left evaluates to null then the argument on the right will be substituted.</p>
<pre><code class="language-C#">int? age = null;
Command.Parameters["Age"].Value = age ?? (object)DBNull.Value;
</code></pre>
<p>This code is far more readable than the previous conditional statement. Alternatively you could write a simple method that takes the parameter and the value that you want to assign to it and have the value set using the conditional method above (reusability, huzzah!), but that would either involve an inline method call or a loop over the parameters collection; there's a sketch of this idea below, after the vb.net example. These approaches, while certainly valid, aren't nearly as compelling as using the coalescing operator. Unfortunately vb.net programmers don't have a comparable operator, and so they'll have to resort to either the conditional, the ternary or the inline method techniques.</p>
<pre><code class="language-vb.net">Dim age As Integer? = Nothing
Command.Parameters("Age").Value = IIf(age Is Nothing, DirectCast(DBNull.Value, Object), age)
</code></pre>
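<p>For what it's worth, the "simple method" I mentioned above could look something like this; ParameterHelper and SetNullable are names I've made up for illustration:</p>
<pre><code class="language-C#">using System;
using System.Data;

public static class ParameterHelper
{
    // assign a possibly-null value to a parameter, substituting DBNull for null
    public static void SetNullable(IDataParameter parameter, object value)
    {
        parameter.Value = value ?? DBNull.Value;
    }
}
</code></pre>
<p>Calling <code>ParameterHelper.SetNullable(Command.Parameters["Age"], age);</code> then does the right thing whether age has a value or not.</p>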
<p>So far I've explained how to handle input parameters, but what's the best way to handle output parameters? Working from my previous post on <a href="/blog/systemnullable-vs-tryparse">system.nullable vs. tryparse</a>, we can use the new NullableParser static object and its TryParse methods to properly set the value of a nullable variable from a stored procedure output parameter.</p>
<pre><code class="language-C#">int? age = null;
NullableParser.TryParse(Command.Parameters["Age"].Value.ToString(), out age, true);
</code></pre>
<p>Aside from being easy to read and support, this code provides a robust and efficient method for setting the nullable age variable to the value of the Age output parameter using the familiar T.TryParse(string s, out T result) syntax.</p>
<hr />