<b>Emacs and ReSharper keybindings in Visual Studio</b><br />
<br />
Since <a href="http://lexicalclosures.blogspot.com/2008/09/who-cares-about-quality-of-text-editor.html">committing to Emacs</a> as the text editor to supplement my primary (integrated) development environment, Visual Studio, Emacs has become an indispensable part of my computing life. I run it on any machine I use regularly and sorely miss it when it's not there. It has lived up to its long-standing reputation as an extremely powerful text editor that is more than just a text editor.<br />
<br />
I use Visual Studio for C# and .NET coding and Emacs for everything else. Emacs functions as my shell environment (via <a href="http://www.gnu.org/software/emacs/manual/html_node/eshell/index.html#Top">Eshell</a> with a major assist from <a href="http://www.cygwin.com/">Cygwin</a>) when working with file systems, local or otherwise. Emacs is where I write and edit Python files, shell scripts, config files, todo lists (with <a href="http://orgmode.org/">org-mode</a>), and notes, and where other tasks too trivial to list all take place. <br />
<br />
Parsing log files, interacting with source control, and <a href="http://stackoverflow.com/questions/299512/how-do-i-connect-to-sql-server-using-emacs/299816#299816">connecting to and running queries on SQL Server</a> have all found their way into my Emacs sessions. Also, since editing JavaScript code in Visual Studio 2005 was as mundane and constraining as using Notepad, with no Intellisense and no <a href="http://www.jetbrains.com/resharper/">ReSharper</a> magic, Emacs became a more nurturing environment for manipulating JS code. (At least until upgrading to Visual Studio 2010, where JavaScript support is vastly improved, thereby displacing Emacs in this one activity.)<br />
<br />
When using Visual Studio for day-to-day coding sessions in C#, Emacs style keybindings are a must for me. While not as precise as <a href="http://www.vim.org">Vim</a>, they provide me with greater control over the text than the default keybindings most others use in VS or in any other modern, popular IDE or text editor. <br />
<br />
<a href="http://www.jetbrains.com/resharper/">ReSharper</a> is what chiefly keeps me relying on Visual Studio for .NET development. This robust and powerful commercial plug-in for Visual Studio has refactoring and navigation features that are not just superb; they grant a degree of control over code and source files analogous to the power Emacs grants its user over text.<br />
<br />
Adopting Emacs keystrokes in Visual Studio was never going to be easy without first arranging a peaceful co-existence with <a href="http://www.jetbrains.com/resharper/docs/ReSharper50DefaultKeymap_VS_scheme.pdf">ReSharper's own extensive list of keyboard shortcuts</a>. Conflicts were inevitable. (These conflicts, and how to resolve them smoothly, are covered in more detail below.)<br />
<br />
Prior to the release of VS2010, the native Emacs keyboard scheme in VS2005 sufficed for coding. To enable it, select from the menu:<br />
<pre>Tools -> Options -> Environment -> Keyboard -> Apply the following additional keyboard mapping scheme -> Emacs
</pre>It provides most of the <a href="http://msdn.microsoft.com/en-us/library/ms165528(v=VS.80).aspx">basic motions and text editing commands</a> found in Emacs ranging from deleting all characters from the cursor to the end of the current line via C-k to moving between words using M-f and M-b. Thus, it gave me the same comforting, familiar experience as coding in Emacs itself. <br />
<br />
Unfortunately, upon learning <a href="http://connect.microsoft.com/VisualStudio/feedback/details/465750/emacs-keyboard-mapping-scheme-not-working-in-visual-studio-2010-beta-1">VS2010 would no longer support Emacs keybindings</a>, I needed to find a suitable alternative when upgrading to it from VS2005 (and skipping over Emacs-enabled VS2008). I briefly considered learning VIM either by trying out <a href="http://www.viemu.com/">ViEmu</a>, a commercial VIM emulator for VS, or <a href="http://visualstudiogallery.msdn.microsoft.com/en-us/59ca71b3-a4a3-46ca-8fe1-0e90e3f79329">VsVim</a>, a similar but free extension for VS2010. Fortunately, sanity prevailed and I decided to use <a href="http://www.cam.hi-ho.ne.jp/oishi/indexen.html">XKeymacs</a>, a utility app that applies Emacs style keybindings to <i>any</i> Windows application or program whether it be Word, Outlook, Notepad, Windows Explorer, or cmd.exe. I'd already been using XKeymacs for some time for those other places that I cannot (easily) reach with Emacs itself including composing emails and editing text in web browsers (although I've had some luck with the Firefox add-in, <a href="https://addons.mozilla.org/en-US/firefox/addon/4125/">It's All Text!</a>). XKeymacs is far from perfect but works reasonably well. <br />
<br />
After using XKeymacs in VS2010 for a bit, I realized that it did a better job than the old native version in VS2005. While XKeymacs has its problems, the built-in Emacs VS keybindings were relatively worse and more aggravating to use in comparison. (Again this will be dissected in more detail.)<br />
<br />
Whether you use XKeymacs or the built-in Emacs VS keybindings, I recommend configuring ReSharper to use the "Visual Studio Scheme" and not the "ReSharper 2.x/IDEA scheme" provided in the default install. I did initially learn and use the (IntelliJ) IDEA scheme when I started with ReSharper, mainly because I thought that if I ever were to use the <a href="http://www.jetbrains.com/idea/">IDEA IDE</a> the transition would be easier. But prematurely optimizing for something that might never happen is to be avoided, and not just when developing software (if I find myself needing to code in Java, I'd probably use <a href="http://wiki.eclipse.org/FAQ_How_do_I_switch_to_vi_or_emacs-style_key_bindings%3F">Eclipse with Emacs keybindings</a> or Emacs itself). <br />
<br />
Choosing the VS scheme over ReSharper's allows the Emacs keybindings to take precedence over the ReSharper keyboard shortcuts whenever possible. The rationale is that Emacs keybindings are far more ubiquitous than ReSharper's. ReSharper is limited to Visual Studio (and the aforementioned IntelliJ IDEA), while Emacs key shortcuts can be found on a multitude of OS platforms, IDEs, text editors, and shell consoles. It is best to stick with the consistency and prevalence of Emacs style key chords rather than with ones from a niche commercial add-on tool. <br />
<br />
Using the Visual Studio scheme for ReSharper does cause problems when working with team members who use the ReSharper/IDEA scheme. It's bad enough that Emacs keybindings annoy anyone who sits at my machine before I've turned them off, a fact usually discovered the moment they type their first character ("Hey, can't I copy? Is Ctrl+C broken? Oh yeah, I forgot, you are using those goofy Emacs keys."). <br />
<br />
What's worse (with deeper implications) is when another teammate, knowing a different set of ReSharper shortcuts than I do, tries to specify a command for me to run. This is likely to occur when pair programming or during code reviews. For example, they might tell me to type Ctrl + n to find the 'Foo' class they recently refactored, and it disrupts the flow until we finally figure out they meant for me to run 'Go to Type', which I know as Ctrl + t. It's like we're speaking to each other in a foreign language. Not good for fostering fluid team collaboration and communication. (This tooling gap occurs with shell commands too, since I am fond of using bash on Windows via Cygwin while others are cmd.exe wizards. Typing 'ls' in cmd.exe instead of 'dir' gets odd looks at times.) <br />
<br />
Shortly after switching to XKeymacs in VS, <a href="http://blogs.msdn.com/b/visualstudio/archive/2010/09/01/emacs-emulation-extension-now-available.aspx">a new Emacs Emulation extension for VS2010 was announced</a> that more or less brings back the same old Emacs keyboard scheme from VS2005 (and VS2008). I was hoping it would fix some of the problems that irritated me in the old scheme, problems subsequently magnified when I switched to XKeymacs. But, with a few exceptions, these annoyances either remain or behave worse. <br />
<br />
Reasons why I prefer XKeymacs over the VS Emacs keybindings in VS2005 and VS2010:<br />
<ul><li>Tabbing and indenting works sanely.</li>
<li>Auto formatting works consistently. For example, curly braces line up properly and blocks of code are smartly indented after hitting ENTER. This is not how it behaves in either VS2005 or VS2010 Emacs scheme.</li>
<li>The DELETE button works by simply pressing it, without also requiring CTRL. It may seem blasphemous to care about a key that is not fundamental to Emacs, and it contradicts my efforts to map the keys faithfully, but it's dead simple functionality that should work in <i>any</i> Windows application, even one ported from another OS platform. Another nicety of XKeymacs: after selecting text, it supports C-d to delete the entire selection, not just the conventional kill command C-w. With VS Emacs, you are limited to C-w. Granted, this is also how Emacs itself behaves, but it is a nice feature I would not mind Emacs supporting out-of-the-box (without requiring an Elisp command in your .emacs file to remap the key).</li>
<li>Able to overwrite selected text simply by typing, without killing it first (the same issue as the previous item). For example, ReSharper's live templates, such as the 'if-statement' code snippet, highlight text as contextual placeholders for typing custom variable names and values. The VS Emacs bindings require cutting the text via C-w before writing any new text. XKeymacs lets you just start typing away without that extra step.</li>
<li>Copying/yanking text from other applications into VS works as expected (especially from Emacs itself). I never figured out how to get this to work properly in the VS2005 Emacs scheme. I suspect the source of the problem was VS maintaining its own internal kill ring (clipboard) that did not interact well with the Windows clipboard. Sometimes hitting ESC before pasting would help, but even that was not reliable. Far worse, VS2010 does not support <i>any</i> copy and paste from external applications (for example, try copying some text from Notepad and then doing C-y in VS; nothing is pasted other than the last yank/kill from within VS).</li>
<li> ReSharper's <a href="http://www.jetbrains.com/resharper/documentation/help20/OtherEdit/clipboard.html">Multiple Entries Clipboard</a> works. It was buggy with the Emacs emulation in VS2005 and in VS2010 it does not work at all.</li>
<li>Not limited to just the text editor but able to use XKeymacs <i>everywhere</i> in Visual Studio such as in dialog boxes (e.g. ReSharper's Rename and Find/Replace features), in ReSharper's <a href="http://www.jetbrains.com/resharper/features/navigation_search.html">Navigation and Search</a> features (e.g. in 'Go To Type' and 'View Recent Files'), in other window panes (e.g. the Immediate Window and Solution Explorer) and in textboxes on the menubar (e.g. the extended command-line).</li>
<li>Can navigate quickly in Intellisense's dropdown list using C-p and C-n in lieu of the arrows keys (although hitting CTRL for some reason makes the list of values transparent)</li>
<li>Does not disable the graphical menu of all open files and window panes when typing CTRL + TAB. It is completely gone from both VS2005 and VS2010 when using VS Emacs scheme.</li>
</ul>Despite all of this praise, XKeymacs has a few faults. On occasion it can behave flakily, requiring either resetting it or closing and then re-running the XKeymacs exe. Another problem is that sometimes the ALT key will trigger the file menu instead of (or in addition to) the normal key command.<br />
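As an aside, the C-d-deletes-the-selection behavior praised in the list above can be approximated in Emacs itself with a few lines in your .emacs. A minimal sketch (the function name <code>my/delete-region-or-char</code> is made up for illustration):<br />
<pre>;; Make C-d delete the active region when there is one (as XKeymacs
;; does), otherwise fall back to deleting the next character.
(defun my/delete-region-or-char ()
  "Delete the region if active, else the character after point."
  (interactive)
  (if (use-region-p)
      (delete-region (region-beginning) (region-end))
    (delete-char 1)))
(global-set-key (kbd "C-d") 'my/delete-region-or-char)
</pre>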
<br />
Regardless, these peculiarities are not enough to keep me from using XKeymacs. However, I use the VS Emacs scheme as a fallback whenever XKeymacs acts unbearably goofy. Switching between the two "modes" is fairly quick in my environment. XKeymacs has the key command C-q that toggles between disabling and enabling it. In addition, I have defined <a href="http://stackoverflow.com/questions/3082836/visual-studio-2008-smoothly-switch-between-emacs-and-default-keybindings/3088302#3088302">a VS macro that toggles between the Emacs scheme and the "Default" VS scheme</a>, bound to the key command M-q. By combining these two commands, hitting C-q M-q (or M-q C-q) lets me go back and forth when necessary. <br />
<br />
Speaking of macros both XKeymacs and VS Emacs scheme support recording and running macros via the same keys commands used in Emacs proper:<br />
<pre>C-x ( start recording a macro
C-x ) stop recording a macro
C-x e run the macro
</pre>XKeymacs' macro feature works in places you would not expect, including applications like Notepad, Outlook, and Word. The VS Emacs scheme's macro key chords are hooked into Visual Studio's built-in macro functionality. Discovering that both supported macros was a surprise to me. I use macros in Emacs quite often, and it's great to have this powerful functionality under the same key commands I'm already familiar with.<br />
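(In Emacs proper, beyond what either VS scheme offers, a recorded macro can also be named and saved for later sessions. A sketch, where <code>my-macro</code> and the recorded keys are made-up examples:)<br />
<pre>;; After recording with C-x ( ... C-x ):
;;   M-x name-last-kbd-macro RET my-macro RET   ; give the macro a name
;;   M-x insert-kbd-macro RET my-macro RET      ; dump its definition
;; insert-kbd-macro writes something like the following into the
;; buffer, which can be kept in .emacs so the macro survives restarts:
(fset 'my-macro [?f ?o ?o])   ; hypothetical recorded keystrokes
</pre>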
<br />
However, I limit XKeymacs' macros to the simplest of text editing scenarios before turning to alternatives. Sometimes they behave strangely, at which point I might switch over to VS's macros via the Emacs scheme, which tend to be a bit smoother. For more complex situations, I'll just open the file in Emacs itself for heavy duty text manipulation. <br />
<br />
<b>Configuring Keybindings in Visual Studio</b><br />
<br />
What follows is a list of conflicts and missing keybindings and how best to resolve them. I will cover not just how XKeymacs can be configured in VS with ReSharper, but also the built-in Emacs VS schemes for both VS2005 and VS2010. (I assume this applies to VS2008 as well, but no guarantees since I've never used it and cannot confirm it can be configured the same.)<br />
<br />
All key combinations are presented in the traditional Emacs style and format. Specifically, "C-" for "CTRL" and "M-" for "ALT". For example, 'Kill to end of line' is shown as <br />
<pre>C-k
</pre>instead of<br />
<pre>CTRL + k
</pre>Also, SHIFT will correspond to the capitalized version of a character such as<br />
<pre>C-K
</pre>instead of<br />
<pre>SHIFT + CTRL + k
</pre>To change any key settings in VS go to: <br />
<pre>Tools -> Options -> Keyboard
</pre>This is where you'll configure keyboard shortcuts. Generally, you should set the value for 'Use new shortcut in:' to be "Text Editor" (and, if necessary, "HTML Editor") since it overrides any other commands with the same pre-existing key shortcuts that are set to be "Global". If no other commands are using the keybindings (this is evident if any values appear in 'Shortcut currently used by:' dropdown list after pressing the desired keys in 'Press shortcut keys:') then it's safe to just use "Global".<br />
<br />
<b>Configuring Keybindings in XKeymacs</b><br />
<br />
When configuring XKeymacs, make certain to select 'devenv.exe' as the specific setting and not the 'Default' value setting. Initially opening XKeymacs' properties always reverts to 'Default' so it is easy to overlook this. <br />
<br />
To configure XKeymacs: <br />
<pre>In Windows System Tray -> right-click XKeymacs icon (or double-click to automatically open 'Properties') ->
Properties -> from dropdown list at top of screen, select 'Microsoft Visual Studio .NET (devenv.exe)' ->
select 'Use Specific Setting'
</pre>For the 'devenv.exe' setting, I recommend removing all extraneous key commands that duplicate the basic Emacs commands. These alternate keystrokes might include an extra SHIFT (or ALT) key press, which could override existing non-Emacs VS commands. For example, the command 'forward-char' is typically just C-f, but in XKeymacs it can also be called with Shift + Ctrl + f. (In VS, that key combination belongs to 'Find All Files', which I use frequently. I'd rather keep it unchanged since I don't need another, more cumbersome, way to move forward one character.)<br />
<br />
To remove these key commands:<br />
<pre>In XKeymacs -> Properties -> change combobox from "Default" to "Microsoft Visual Studio .NET (devenv.exe)" -> Change from 'Use Default Setting' to 'Use Specific Setting' ->
Click 'Advanced' tab -> under 'Category' select "Motion" -> under 'Commands' select "forward-char" ->
under 'Current keys' select "Ctrl+Shift+F" -> click 'Remove' button
</pre>Repeat this for all other commands found under the categories:<br />
<ul><li>Search</li>
<li>Motion</li>
<li>Killing and Deleting</li>
<li>Other</li>
</ul>Commands that have two or more key bindings associated with them should have their SHIFT-modified shortcuts removed. There are a lot of them. Once removed, this frees up some default shortcuts for other VS and ReSharper commands that use the same keys.<br />
<br />
<a name="delete-next-character"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#delete-next-character">Delete Next Character: C-d</a></b><br />
<br />
<b><i>XKeymacs Mode</i></b>:<br />
<br />
XKeymacs' command C-d overrides <a href="http://www.jetbrains.com/resharper/features/coding_assistance.html#Duplicate_Line_or_Selection">ReSharper's Duplicate Line or Selection</a> command. Instead, set<br />
<pre>ReSharper.ReSharper_DuplicateText
</pre>to use:<br />
<pre>C-D
</pre>As mentioned, make sure that XKeymacs is configured to no longer use SHIFT for killing forward characters thus making this available for ReSharper's duplicate text command.<br />
<br />
While forward character delete is consistent with Emacs, <i>backward</i> character delete is not only bound to the expected BACKSPACE key as in Emacs, but also to C-h. I tend to forget this exists in XKeymacs since in Emacs proper C-h is bound to Help. C-h is a better fit for backward delete than the infrequently called Help command since it keeps your fingers on <a href="http://en.wikipedia.org/wiki/Touch_typing#Home_row">home row</a>.<br />
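If you want the same C-h behavior in Emacs proper, a common .emacs tweak rebinds it (a sketch; note that Help remains reachable via the F1 key, which is bound to it by default):<br />
<pre>;; Rebind C-h to backward delete, keeping fingers on home row.
;; Help is still available on F1 (help-command) out of the box.
(global-set-key (kbd "C-h") 'delete-backward-char)
</pre>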
<br />
<b><i>VS Emacs Scheme</i></b>:<br />
<br />
When first hitting C-d and prompted with the 'ReSharper Shortcut Conflict' dialog box, I selected "Use Visual Studio commands", but this does not automatically bind C-d to the 'delete next character' command. The reason is that C-d is reserved for other commands in VS, including the family of Debug.* commands. According to the documented list of VS Emacs shortcuts, C-Delete is what is expected to be used instead. This is not consistent with Emacs.<br />
<br />
Therefore, manually set the following VS command (using "Text Editor" and not "Global" setting)<br />
<pre>Edit.Delete
</pre>to<br />
<pre>C-d
</pre>Then follow the same steps as XKeymacs mode for reassigning ReSharper's duplicate text command.<br />
<br />
<a name="move-end-of-line"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#move-end-of-line">Move to the end of the line: C-e</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
There are some conflicts with existing ReSharper bindings, including the <a href="http://www.jetbrains.com/resharper/features/code_templates.html#Live_Templates">'Insert Live' template</a>, <a href="http://www.jetbrains.com/resharper/features/code_templates.html#Surround_With_Templates">'Surround With' template</a>, and <a href="http://www.jetbrains.com/resharper/features/code_formatting.html">Code Cleanup</a> commands. Some of these I use regularly and some I don't. Here are two I do use:<br />
<br />
Bind 'Surround With' template command<br />
<pre>ReSharper.ReSharper_SurroundWith
</pre>to<br />
<pre>C-M-j
</pre>The above keybinding is borrowed from the ReSharper 2.x/IDEA scheme.<br />
<br />
Bind 'Code Cleanup' command<br />
<pre>ReSharper.ReSharper_CleanupCode
</pre>to<br />
<pre>M-j
</pre>The above keybinding is not based on anything other than being consistent with the previous 'Surround With' command.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
When hitting <pre>C-e</pre>you'll be prompted by ReSharper to select which scheme to use, the default VS or ReSharper's (IDEA). Select the VS scheme, not ReSharper's. This should be sufficient, but make sure that:<br />
<pre>Edit.EmacsLineEnd
</pre>is bound to<br />
<pre>C-e
</pre><br />
<a name="kill-region"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#kill-region">Kill region: C-w</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
Bind the VS command:<br />
<pre>Edit.Cut
</pre>to<br />
<pre>C-w
</pre>Oddly, this standard Emacs keybinding is not part of the VS Emacs emulation. As an alternative, it recommends using Shift + DELETE. Meanwhile, C-w is bound to Edit.SelectCurrentWord, which I thought was ReSharper's similar function but is actually native to VS. <br />
<br />
I recommend removing the binding from Edit.SelectCurrentWord and just use ReSharper's equivalent version, 'Extend selection', which already works with C-M-Right Arrow (and conversely 'Shrink Selection' set to C-M-Left Arrow).<br />
<br />
<a name="delete-to-end-of-line"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#delete-to-end-of-line">Delete (kill) from cursor (point) to end of line: C-k</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
Bind the VS command:<br />
<pre>Edit.EmacsDeleteToEOL
</pre>to<br />
<pre>C-k
</pre>No (apparent) conflict with ReSharper, but for some reason it did not seem bound at all, even though according to the list of VS Emacs key shortcuts it should be. Instead, C-k by default serves as a prefix for a whole class of chorded commands such as Edit.CommentSelection. <br />
<br />
<a name="tab-and-indentation"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#tab-and-indentation">Tab and Indentation: C-i</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
This works as expected. Hitting ENTER automatically indents the next line to match the previous line.<br />
<br />
XKeymacs does not support C-j, which <a href="http://www.gnu.org/software/emacs/manual/html_node/emacs/Basic-Indent.html">inserts a newline and indents</a>, but it does support C-m, an alternate to ENTER that only inserts a newline. C-m works the same as ENTER in Visual Studio, meaning it does smart indenting too, with the advantage that it is easier to hit than ENTER. Regardless, I also bound C-j to Edit.BreakLine for consistency and convenience.<br />
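(For comparison, in Emacs itself C-j runs <code>newline-and-indent</code>. A common one-line .emacs tweak makes ENTER do the same, matching the smart-indent-on-ENTER behavior described here:)<br />
<pre>;; Make ENTER insert a newline and indent, just like C-j.
(global-set-key (kbd "RET") 'newline-and-indent)
</pre>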
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
This is messy. Tabbing and indenting is one of the weakest and most confusing areas of the VS Emacs scheme. It does not behave as you would expect, even interfering with smart indentation. <br />
<br />
In both the VS2005 and VS2010 Emacs schemes, if you write a line of code and then hit ENTER, the new line will not automatically indent. Instead, ENTER takes the point (cursor) to the first column of the line, and you have to hit TAB <i>again</i> to get it to indent. Alternatively, after you start typing some new code, hitting ";" will automatically indent the line to match the preceding one. Neither option is optimal. <br />
<br />
Instead, you need to get in the habit of using C-j. This key command is bound to <a href="http://stackoverflow.com/questions/3669771/using-emacs-extension-in-visual-studio-2010-disables-auto-indent/3686632#3686632">'Edit.EmacsBreakLineIndent', which will insert a new line <i>and</i> indent it correctly</a>. <br />
<br />
Hitting TAB on an existing line with text behaves strangely. In VS2010, it toggles the tabbing if the line is already tabbed. In VS2005, it does nothing; the line remains unchanged. Once a line is indented, you cannot further indent it by pressing TAB in either version of Visual Studio.<br />
<br />
To address this issue, I originally bound:<br />
<pre>Edit.IncreaseLineIndent
</pre>to<br />
<pre>C-M-TAB
</pre>and<br />
<pre>Edit.DecreaseLineIndent
</pre>to<br />
<pre>Shift-C-M-TAB
</pre>This was really helpful in JS files in VS2005, where no smart indenting was supported.<br />
<br />
However, I then saw <a href="http://stackoverflow.com/questions/488638/visual-studio-2008-emacs-mode/489749#489749">this suggested solution to inject additional tabs</a> by using:<br />
<pre>C-q TAB
</pre>It relies on the <a href="http://www.gnu.org/software/emacs/manual/html_node/emacs/Inserting-Text.html">Emacs' quoted insert text functionality</a> that VS Emacs scheme also supports via Edit.EmacsQuotedInsert which is bound to C-q.<br />
<br />
Unfortunately, C-q conflicts with XKeymacs key command to enable and disable itself which I use fairly heavily. Therefore, I assigned C-Q to Edit.EmacsQuotedInsert and removed the extra keybind from XKeymacs. <br />
<pre>XKeymacs -> Under Categories "Other" -> under 'Commands' select "Enable or Disable XKeymacs" -> under 'Current Keys' select Ctrl+ Shift + Q -> click 'Remove'
</pre><br />
Finally, for completeness on all-things-tabbing, I thought I could bind<br />
<pre>C-i
</pre>to<br />
<pre>Edit.Indent
</pre>but that is reserved by 'Incremental Search' in the default, non-Emacs scheme. I just left it "as is" since it is convenient to have when pair programming with Emacs keybindings turned off.<br />
<br />
<a name="delete-spaces-and-tabs-around-point"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#delete-spaces-and-tabs-around-point">Delete spaces and tabs around point M-\</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
Bind the VS command:<br />
<pre>Edit.DeleteHorizontalWhiteSpace
</pre>to<br />
<pre>M-\
</pre>XKeymacs does not support this command but I use it all the time in Emacs. However, it conflicts with ReSharper's 'Go to File Member'. Therefore I re-mapped <br />
<pre>ReSharper.ReSharper_GotoFileMember
</pre>to<br />
<pre>M-|
</pre><b><i>VS Emacs Scheme:</i></b><br />
<br />
Follow the same steps as described for XKeymacs mode.<br />
<br />
<a name="expand-word-in-buffer-as-dynamic-abbrev"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#expand-word-in-buffer-as-dynamic-abbrev">Expand the word in the buffer before point as a dynamic abbrev: M-/</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
Does not support this command so bind Visual Studio Intellisense/AutoComplete command:<br />
<pre>Edit.CompleteWord
</pre>to<br />
<pre>M-/
</pre><b><i>VS Emacs Scheme:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<a name="running-commands-by-name"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#running-commands-by-name">Running commands by name: M-x</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
XKeymacs supports the keybinding M-x for running external commands in a cmd.exe shell, but I never found much use for it since it is awkward to use with no visual feedback of what you just ran.<br />
<br />
Therefore, I removed from 'devenv.exe' settings:<br />
<pre>In XKeymacs -> under 'Category' select "Other" -> under 'Commands' select 'execute-extended-command' -> under 'Current keys' select "Meta-X" -> click 'Remove' button
</pre>Then bind the VS command:<br />
<pre>Tools.GoToCommandLine
</pre>to<br />
<pre>M-x
</pre><br />
I use this VS command line feature frequently particularly for TFS source control commands such as <br />
<ul><li>File.TfsCompare</li>
<li>File.TfsUndoCheckout</li>
<li>File.TfsHistory</li>
</ul><br />
and to call other commands from the Application Menu such as<br />
<ul><li>Tools.Options</li>
<li>Tools.AttachProcess</li>
<li>File.CopyFullPath (pasting in other applications like emacs itself)</li>
<li>bl (alias for Debug.Breakpoints)</li>
<li>callstack</li>
<li>View.BookmarkWindow (precedes ReSharper 5.0 bookmark)</li>
</ul><b><i>VS Emacs Scheme:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<a name="incremental-search-forward"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#incremental-search-forward">Incremental search forward: C-s</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
Works as expected, hooking into VS's <pre>Edit.IncrementalSearch</pre>although I'm not sure how it gets automatically bound to it. It is a different command than the traditional 'Find/Replace' command that XKeymacs usually uses in most other Windows applications. Regardless, one less thing to bind manually.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<a name="incremental-search-backward"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#incremental-search-backward">Incremental search backward: C-r</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
N/A. Works <i>unexpectedly</i> just as incremental search forward.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
This is not bound to C-r even though C-s is bound to search forward. Instead, the default key shortcut is C-I, which is awkward to use when toggling with incremental search forward (C-s). Therefore, I mapped the VS command:<br />
<pre>Edit.ReverseIncrementalSearch
</pre>to<br />
<pre>C-r
</pre>C-r conflicts with numerous individual ReSharper refactoring commands. However, since I tend to use C-R to get a list of contextual refactorings and was never in the habit of calling individual refactoring commands, remapping C-r is preferable. <br />
<br />
<a name="scroll-current-line-center"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#scroll-current-line-center">Scroll the selected window so that the current line is the center-most text line: C-l</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
XKeymacs does support this command but it does not work properly in VS. Instead of centering the screen on the current line, it keeps moving the screen one line down. <br />
<br />
Therefore, I mapped the VS command:<br />
<pre>Edit.ScrollLineCenter
</pre>to<br />
<pre>C-l
</pre>and then removed the C-l keybinding from XKeymacs 'devenv.exe' setting by:<br />
<pre>XKeymacs -> Advanced -> under 'Category' select "Motion" -> under 'Commands' select "recenter" -> under 'Current keys:' select "Ctrl + L" -> click 'Remove' button
</pre><b><i>VS Emacs Scheme:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<a name="tranpose-character"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#tranpose-character">Transpose character: C-t</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
One of the rare circumstances where I chose ReSharper's default key shortcut and ignored an existing Emacs keybinding. I removed XKeymacs' binding of C-t by going into its settings and configuring:<br />
<pre>XKeymacs -> Advanced -> under 'Category' select "Killing and Deleting" -> under 'Commands' select "transpose-chars" -> under 'Current keys:' select "Ctrl + T" -> click 'Remove' button
</pre>I hardly encounter situations where I need to transpose single characters (transposing words and lines is more common). On the other hand, I use ReSharper's 'Go to Type' command all the time, so I make an exception to keep C-t assigned to it.<br />
<br />
Make certain the VS command:<br />
<pre>Resharper.Resharper_GoToType
</pre>is bound to<br />
<pre>C-t
</pre><b><i>VS Emacs Scheme:</i></b><br />
<br />
Follow the same steps as described for XKeymacs mode.<br />
<br />
<a name="numeric-argument"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#numeric-argument">Numeric Argument: C--</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
The VS command for navigating backwards is another situation where a VS keybinding wins over an existing Emacs keybinding:<br />
<pre>View.NavigateBackward
</pre>is already bound to<br />
<pre>C--
</pre>However, this conflicts with the XKeymacs command 'numeric argument -' for passing negative arguments related to the C-u repetition counts. I never use this, and if I did, I would call it via M--. So I kept it unchanged in VS and removed the conflict from the XKeymacs 'devenv.exe' settings:<br />
<pre>XKeymacs -> Advanced -> under 'Category' select "Other" -> under 'Commands' select "numeric argument -" -> under 'Current keys:' select "Ctrl + -" -> click 'Remove' button
</pre>The 'cycle through mark ring' command in Emacs, C-x C-SPC, reminded me of the VS command View.NavigateBackward. Although not quite the same as the VS feature, it is similar enough that I also bound these keys for consistency.<br />
<br />
To do this, find the .xkeymacs config file in the XKeymacs directory. <br />
<pre>...\xkeymacs347\etc\English (United States).xkeymacs
</pre>Add the line:<br />
<pre>(fset 'cycle-mark-ring "\Ctrl+-")
</pre>Then configure <br />
<pre>XKeymacs Properties -> select 'devenv.exe' -> 'Advanced' tab -> Category 'Original Command' ->
</pre>you should now see the new command, 'cycle-mark-ring', added earlier in the config file, in the list of commands. To continue:<br />
<pre>Under 'Press new shortcut key:' -> check 'Ctrl-X' -> in text field below it, press 'Ctrl-SPACEBAR' -> click 'Assign' -> 'Current keys:' list should now include 'Ctrl-X Ctrl-Space'.
</pre>Now in VS, along with C--, C-x C-SPC will also run the navigate backward command when XKeymacs is enabled.<br />
<br />
I initially tried to bind to C-u C-SPC, but XKeymacs only allows prefixing commands with C-x. However, this is for the best since <i>C-x</i> C-SPC is the more appropriate binding: it cycles through <i>all</i> buffers, just as the View.NavigateBackward command navigates through all open documents in Visual Studio, whereas C-u C-SPC only cycles through marks in the current buffer.<br />
<br />
I generally stick to using C-- since it is less awkward to type and more consistent and fluid when switching back and forth with View.NavigateForward, which is C-_ (a.k.a. Shift + Ctrl + -).<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
Follow the same steps as XKeymacs mode.<br />
<br />
<a name="undo"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#undo">Undo: C-_</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
The complementary VS command to the previous item's 'navigate backward' is 'navigate forward':<br />
<pre>View.NavigateForward
</pre>is bound to<br />
<pre>C-_
</pre><br />
This keybinding conflicts with one of the <i>three</i> existing XKeymacs keybindings for 'Undo'. Since I always use C-/ for undo, I removed the unnecessary conflicting keybinding:<br />
<pre>XKeymacs -> Advanced -> under 'Category' select "Error Recovery" -> under 'Commands' select "undo" -> under 'Current keys:' select "Ctrl + Shift + -" -> click 'Remove' button
</pre><b><i>VS Emacs Scheme:</i></b><br />
<br />
A conflict exists since under the VS Emacs scheme<br />
<pre>Edit.Undo
</pre>is bound to <br />
<pre>C-_
</pre>Again, since Edit.Undo is also bound to C-/, which is the only binding I use, remove the C-_ binding from the Emacs scheme.<br />
<br />
<a name="delete-blank-lines"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#delete-blank-lines">Delete blank lines: C-x C-o</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
This is not supported in XKeymacs, but Visual Studio does have this command. It behaves slightly differently in VS than in Emacs. In VS, it removes all blank lines from the cursor and below; any blank lines above the current one are ignored. To remove those, you need to call the delete blank lines command again. In Emacs, the equivalent command removes all of the blank lines below <i>and above</i> but leaves the current blank line. To get rid of this line, you repeat the same command.<br />
<br />
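As an aside, the Emacs side of this behavior is easy to model. The following JavaScript is a toy sketch of my own (not anything XKeymacs or VS actually executes) showing what delete-blank-lines does when point sits on a blank line:<br />

```javascript
// Toy model: collapse the run of blank lines surrounding index i
// into a single blank line, leaving the rest of the buffer intact.
function emacsDeleteBlankLines(lines, i) {
  let lo = i;
  while (lo > 0 && lines[lo - 1].trim() === "") lo--; // scan upward
  let hi = i;
  while (hi < lines.length - 1 && lines[hi + 1].trim() === "") hi++; // scan downward
  return [...lines.slice(0, lo), "", ...lines.slice(hi + 1)];
}

console.log(emacsDeleteBlankLines(["a", "", "", "", "b"], 2)); // → [ 'a', '', 'b' ]
```

This only models the collapsing step; the follow-up deletion of the one remaining blank line is left out for brevity.<br />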
Using the C-x C-o keybinding requires reconfiguring XKeymacs. First, assign in VS:<br />
<pre>Edit.DeleteBlankLines
</pre>to<br />
<pre>M-o
</pre>I would have preferred using C-x C-o, but C-x is the common shortcut for Edit.Cut, and I'd rather not mess with it so it stays available when pair programming. Therefore, I arbitrarily chose M-o as an alternative. <br />
<br />
XKeymacs lets you map keybindings to custom commands that apply while it is running. <br />
<br />
Find the .xkeymacs config file in the XKeymacs directory. For example, mine is found under Program Files:<br />
<pre>C:\Program Files\xkeymacs\xkeymacs347\etc\English (United States).xkeymacs
</pre>Add the line:<br />
<pre>(fset 'delete-blank-lines "\Alt+o")
</pre>Then configure <br />
<pre>XKeymacs Properties -> select 'devenv.exe' -> 'Advanced' tab -> Category 'Original Command' ->
</pre>you should now see the new command, 'delete-blank-lines', added earlier in the config file, in the list of commands. To continue:<br />
<pre>Under 'Press new shortcut key:' -> check 'Ctrl-X' -> in text field below it, press 'Ctrl-o' -> click 'Assign'
</pre>'Current keys:' list should now include 'Ctrl-X Ctrl-O'. Now in VS, C-x C-o will run the delete blank lines command (as well as M-o, the original binding in VS) when XKeymacs is enabled.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<a name=""></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#split-selected-window-into-two-windows">Split the selected window into two windows, one above the other: C-x 2</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
As with the previous item, XKeymacs does not support this but VS does have the command. However, just like before, I'm unable (or unwilling) to bind to C-x 2 since C-x interferes with Edit.Cut in non-Emacs mode. Therefore, I need to map the binding in XKeymacs.<br />
<br />
First, in VS make sure:<br />
<pre>Window.Split
</pre>is bound to<br />
<pre>Ctrl + F6
</pre>Then in XKeymacs config file, '.xkeymacs':<br />
<pre>...\xkeymacs347\etc\English (United States).xkeymacs
</pre>add line:<br />
<pre>(fset 'split-window-vertically [?\Ctrl+f6])
</pre>Then configure XKeymacs <br />
<pre>Properties -> select 'devenv.exe' -> 'Advanced' tab -> under Category 'Original Command' ->
</pre>should now see new command, 'split-window-vertically', in the list of commands<br />
<pre>Under 'Press new shortcut key:', check 'Ctrl-X' -> in text field below it, press '2' -> click 'Assign'
</pre>'Current keys:' list should now include 'Ctrl-X 2'. Now in VS, C-x 2 will run the 'window split' command (as well as Ctrl + F6, the original binding in VS).<br />
<br />
Although I don't use splitting windows that often, it is useful on those rare occasions when I need to view one part of a file while editing another part of that same file. <br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<a name="remove-other-windows"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#remove-other-windows">Remove other windows (remove split): C-x 1</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
Continuing from the previous item, we need a way to remove the split window too. In VS, the same command, 'Window.Split', that initially splits the window also removes it; it toggles between the two states.<br />
<br />
Follow all of the same steps as described in the previous item for window split, specifically adding the new mapping in the .xkeymacs config file.<br />
<br />
Now go back into XKeymacs to add an additional keybinding for the same 'window split' command: <br />
<pre>XKeymacs Properties -> select 'devenv.exe' -> 'Advanced' tab -> Category 'Original Command' ->
</pre>should now see (new) command, 'split-window-vertically', in the list of commands<br />
<pre>under 'Press new shortcut key:', check 'Ctrl-X' -> in text field below it, press '1' -> click 'Assign' ->
</pre>'Current keys:' list should now include 'Ctrl-X 1' along with 'Ctrl-X 2' added previously. Now in VS, C-x 1 will run the 'window split' command (as well as Ctrl + F6, the original binding in VS) to remove the window split.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<a name="select-another-window"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#select-another-window">Select another window: C-x o</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
This command is available once a window for a file has been split (see the two previous items). It allows one to move the cursor and focus back and forth between the two views.<br />
<br />
In VS, check to see if<br />
<pre>Window.NextSplitPane
</pre>is assigned to<br />
<pre>F6
</pre>If not, then set it as F6.<br />
<br />
Then in XKeymacs config file, '.xkeymacs':<br />
<pre>...\xkeymacs347\etc\English (United States).xkeymacs
</pre>add line:<br />
<pre>(fset 'other-window [f6])
</pre><br />
Then configure<br />
<pre>XKeymacs Properties -> select 'devenv.exe' -> 'Advanced' tab -> under Category 'Original Command',
</pre>should now see new command, 'other-window', in the list of commands<br />
<pre>Under 'Press new shortcut key:', check 'Ctrl-X' -> in text field below it, press 'o' -> click 'Assign'
</pre>'Current keys:' list should now include 'Ctrl-X o'. Now in VS with XKeymacs, C-x o will run the 'next split pane' command (as well as F6, the original keybinding in VS).<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<a name="go-to-line"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#go-to-line">Go to line: M-g g</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
In VS, check to see if:<br />
<pre>Edit.GoTo
</pre>is assigned to<br />
<pre>C-g
</pre>If not, then set it as C-g.<br />
<br />
Then in XKeymacs config file, '.xkeymacs':<br />
<pre>...\xkeymacs347\etc\English (United States).xkeymacs
</pre>add line:<br />
<pre>(fset 'goto-line "\Ctrl+g")
</pre>Then configure<br />
<pre>XKeymacs Properties -> select 'devenv.exe' -> 'Advanced' tab -> under Category 'Original Command' ->
</pre>should now see new command, 'goto-line', in the list of commands<br />
<pre>Under 'Press new shortcut key:' -> in text field below it, press 'Alt + g' -> click 'Assign'
</pre>'Current keys:' list should now include 'Meta + G'.<br />
<br />
However, this keybinding does not always work. Therefore, assign an alternate keybinding. Go back to<br />
<pre>'Press new shortcut key:' -> in text field below it, press 'Ctrl+Alt+g' -> click 'Assign'
</pre>'Current keys:' list should now include 'Ctrl+Meta+G' along with 'Meta+G' added previously.<br />
<br />
In VS, C-g or C-M-g will now run the 'go to line' command.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<a name="delete-beginning-line-to-point"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#delete-beginning-line-to-point">Delete from beginning of line to point: M-0 C-k</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
M-0 C-k kills from point (cursor) to the beginning of the current line. It is the opposite of C-k, which kills from point to the <i>end</i> of the line. Unlike XKeymacs, the equivalent VS command, Edit.DeleteToBOL, removes all text back to the start of the line, including the leading whitespace.<br />
<br />
In VS assign:<br />
<pre>Edit.DeleteToBOL
</pre>to <br />
<pre>M-0 C-k
</pre><br />
<a name="open-line-above"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#open-line-above">Open line above: C-M-Enter</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
Emacs does not have a single command to insert a newline above the current line; doing so requires calling two commands: C-a C-o. However, VS has a command, Edit.LineOpenAbove, that inserts a newline above regardless of your location on the current line. This behavior is reminiscent of VIM's 'O' command.<br />
<br />
In VS assign:<br />
<pre>Edit.LineOpenAbove
</pre>to <br />
<pre>C-M-Enter
</pre>VS also has the command Edit.LineOpenBelow, which inserts a newline <i>below</i> the current one. It is automatically bound to Shift + Ctrl + Enter. This is similar to Emacs' C-o command but does not drag the rest of the text after the point down to the new line; instead, it behaves like the 'o' command in VIM. Although a handy, shorter command, I rarely use it since I am accustomed to typing C-e C-m to achieve the same thing.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
Follow the same steps as XKeymacs mode.<br />
<br />
<a name="swap-point-and-mark"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#swap-point-and-mark">Swap point and mark: C-x C-x</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
Surprisingly, XKeymacs does not support this command. Therefore, in Visual Studio assign the keybinding for the equivalent 'swap anchor' command:<br />
<pre>Edit.SwapAnchor
</pre>to<br />
<pre>C-k C-a
</pre>Then in XKeymacs config file, '.xkeymacs':<br />
<pre>...\xkeymacs347\etc\English (United States).xkeymacs
</pre>add line:<br />
<pre>(fset 'exchange-point-and-mark "\Ctrl+k\Ctrl+a")
</pre>Then configure<br />
<pre>XKeymacs Properties -> select 'devenv.exe' -> 'Advanced' tab -> under Category 'Original Command' ->
</pre>should now see new command, 'exchange-point-and-mark', in the list of commands<br />
<pre>Under 'Press new shortcut key:', check 'Ctrl-X' -> in text field below it, press 'C-x' -> click 'Assign'
</pre>'Current keys:' list should now include 'Ctrl-X Ctrl-X'.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
N/A. Works as expected.<br />
<br />
<a name="list-buffer"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#list-buffer">List buffer: C-x C-b</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
The ReSharper command 'View Recent Files' (C-,) reminded me of Emacs' list-buffers command. Therefore, I thought I might bind the Emacs key shortcut to it too.<br />
<br />
In XKeymacs config file, '.xkeymacs':<br />
<pre>...\xkeymacs347\etc\English (United States).xkeymacs
</pre>add line:<br />
<pre>(fset 'list-buffers "\Ctrl+,")
</pre>Then configure<br />
<pre>XKeymacs Properties -> select 'devenv.exe' -> 'Advanced' tab -> under Category 'Original Command' ->
</pre>should now see new command, 'list-buffers', in the list of commands<br />
<pre>Under 'Press new shortcut key:', check 'Ctrl-X' -> in text field below it, press 'C-b' -> click 'Assign'
</pre>'Current keys:' list should now include 'Ctrl-X Ctrl-B'.<br />
<br />
This is helpful to those who learned Emacs before using ReSharper. I still generally use C-, since it is more convenient and I've been using that ReSharper shortcut for a long time. However, it is nice to have the alternative there for consistency.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
Follow the same steps as XKeymacs mode.<br />
<br />
<a name="back-to-indentation"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#back-to-indentation">Back to indentation: M-m</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
Although C-a does a reliable job of moving from anywhere on the current line to the first indentation (i.e. the first non-space character on the line), I am in the habit of using M-m for this in Emacs. For example, I use it frequently when coding in Python in Emacs (indentation being core to the Python language), so mapping the equivalent VS command to M-m provides further consistency for me.<br />
<br />
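For illustration only (this is not code from VS or XKeymacs), the effect of 'back to indentation' boils down to finding the column of the first non-blank character on the line:<br />

```javascript
// Toy model of M-m / Edit.LineStartAfterIndentation: return the column
// the cursor lands on, i.e. the first non-whitespace character.
function backToIndentation(line) {
  return line.search(/\S|$/); // falls back to end-of-line for blank lines
}

console.log(backToIndentation("    return x;")); // → 4
```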
In VS, assign:<br />
<pre>Edit.LineStartAfterIndentation
</pre>to<br />
<pre>M-m
</pre><b><i>VS Emacs Scheme:</i></b><br />
<br />
Unlike XKeymacs, here C-a does not move the point to the first (non-blank) character of the line but instead moves it to the beginning of the line. Therefore, binding M-m to Edit.LineStartAfterIndentation is extremely useful. Follow the steps described for XKeymacs mode.<br />
<br />
<a name="join-lines"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/10/emacs-resharper-visual-studio-xkeymacs.html#join-lines">Join Lines: M-^</a></b><br />
<br />
<b><i>XKeymacs Mode:</i></b><br />
<br />
Neither XKeymacs nor Visual Studio has this command. I hadn't given it much thought until answering the question <a href="http://stackoverflow.com/q/3835523/4872">does Visual Studio 2010 not have a “join lines” keyboard shortcut?</a> on StackOverflow. Truthfully, I don't find myself needing this when coding in C# with ReSharper, but now that I've made the effort to create a macro, it will be included as part of my general keybindings configuration. <br />
<br />
Essentially, create a macro named something like "JoinLines"; I recommend using <a href="http://stackoverflow.com/revisions/75d25f33-fee3-4143-bcca-3b2dc52cca44/view-source">the code from my original answer</a> since it behaves more like the Emacs version. The subsequent <a href="http://stackoverflow.com/questions/3835523/does-visual-studio-2010-not-have-a-join-lines-keyboard-shortcut/3835645#3835645">updated answer for joining lines</a> follows the VIM equivalent more closely.<br />
<br />
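To show what the macro is aiming for, here is a rough JavaScript sketch of Emacs-style join lines: M-^ joins the current line onto the previous one, collapsing the whitespace between them to a single space. (Emacs applies extra whitespace rules, e.g. around parentheses, that this toy version ignores.)<br />

```javascript
// Toy model of Emacs' join lines (delete-indentation): merge line i into
// the line above it with a single space in between.
function joinLines(lines, i) {
  const joined = lines[i - 1].replace(/\s+$/, "") + " " + lines[i].replace(/^\s+/, "");
  return [...lines.slice(0, i - 1), joined, ...lines.slice(i + 1)];
}

console.log(joinLines(["Hello,", "    world!"], 1)); // → [ 'Hello, world!' ]
```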
Once the VS macro is created then assign the keybindings:<br />
<pre>Tools -> Options -> Environment -> Keyboard -> in text field for 'Show commands containing:', type "JoinLines" and the new macro command should display.
</pre>then assign this command to<br />
<pre>M-^
</pre>This approach opens the door to creating other VS macros for commands that are missing in VS but found in Emacs. Join lines is currently the only one, so I need to keep this in mind next time I want to port some Emacs function over to VS.<br />
<br />
<b><i>VS Emacs Scheme:</i></b><br />
<br />
Follow same steps as XKeymacs mode.Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-8416807644832025349.post-65011141083830854632010-08-31T22:28:00.000-07:002010-08-31T22:54:09.052-07:00Unit Testing PerilsAdding unit tests to existing code is nowadays applauded for being progressive, almost borderline pedestrian, for most software development shops. This was not the case when I began writing automated unit tests several years ago. It now feels otherworldly to see greater acceptance of unit testing primarily because <a href="http://lexicalclosures.blogspot.com/2009/01/test-driven-blogging.html">I'd greatly toned down my advocacy for it</a>. This shift in attitude includes a (substantial) decrease practicing test driven development (TDD). Unit testing can not simply be applied on blind faith hoping to cure all of one's software ills.<br />
<br />
Not long ago, I and other developers on a .NET software project were retroactively adding unit tests (no TDD) for recently produced C# code. Part of the process involved removing cruft, specifically code that was not being called by any other code. The motivation was to help increase code coverage by removing untouched lines of code. We relied on the static analysis features of the Visual Studio plugin <a href="http://www.jetbrains.com/resharper/">ReSharper</a> to reveal these isolated areas of code. In ReSharper parlance, <a href="http://www.jetbrains.com/resharper/features/navigation_search.html#Find_Usages">Find Usages</a> handles the work of hunting down any dependencies for symbols and functions. <br />
<br />
One of the areas where ReSharper indicated no usages were found was a few simple getter/setter properties of a class. This code was then confidently removed. However, it was later discovered that the removed code <i>did</i> indeed serve a purpose and was providing functionality to one of the GUI screens of the WinForms application. More specifically, a few of the columns in a 'DataGridView' control that allowed editing no longer did so. They unexpectedly became read-only. <br />
<br />
The GUI screens had no tests; the discovery was made via manual end-to-end testing. This 'DataGridView' control was bound to the properties of the class and inferred from them whether they had getters, setters, or both. The 'Set' accessors of the properties were naively removed since the code <i>we</i> wrote did not seem to call them. The grid control, however, was binding to them and passing values to and from the properties. No setter accessors meant the associated columns had become effectively non-editable.<br />
<br />
Realizing our mistake we rolled back our original edits for that class and cautiously reviewed all other recent refactorings.<br />
<br />
<a href="http://www.joelonsoftware.com/">Joel Spolsky</a> once stated that <a href="https://stackoverflow.fogbugz.com/default.asp?pg=pgWiki&command=view&ixWikiPage=29025">manual testing is all you need in developing quality software and that unit testing provides no notable value </a>. Others taking the inevitable opposing view immediately went on the offensive <a href="http://blog.objectmentor.com/articles/2009/02/06/on-open-letter-to-joel-spolsky-and-jeff-atwood">denouncing his claims.</a> Joel's view is more an over reaction to all the TDD zealots who, whether intentional or not, seem to be deemphasizing the value of old fashion manual testing. Meanwhile, the dissenting voices are manifesting as a fear that their do-no-wrong methodology (and possibly their identity) might possibly amount to nothing. Both views are too extreme and leaning one way or the other can cost you in other areas. You need to <a href="http://lexicalclosures.blogspot.com/2009/01/test-driven-blogging.html">continually find and maintain a balance</a> in testing and not become complacent with whatever approach you take.<br />
<br />
Yes, integration-style tests <i>might</i> have helped in exercising and validating the correct behavior in the UI, but even that might give you a false sense of security. Manual testing has its virtues. Without it, you might overlook the human elements of UI functionality, design, and usability. Likewise, it is an incorrect assumption that no compiler errors means your application is fully functional and ready for end-users. <br />
<br />
Also, these were CRUD operations. In my experience they tend to be the least risky part of an application, the least likely to be buggy, and the quickest to identify and fix. The cost of writing and <i>maintaining</i> these sorts of unit tests does not warrant their benefit. Is it really worth your time and effort chasing an unrealistic 100% code coverage? You learn your lesson and then move on (part of continually defining the proper testing balance). Regardless, you should confirm that your seemingly harmless refactoring does not produce unwanted side effects. Unit tests help with that, but not by themselves. <br />
<br />
Putting the merits of unit testing aside for a moment, the inability to easily detect how the .NET framework is referencing and interacting with my code can be somewhat irritating. This datagrid binding issue is another one of those features in .NET where it does something behind the scenes on your behalf (<a href="http://en.wiktionary.org/wiki/automagical">automagically</a>!) but it is not clear (at least not from a coding perspective) what and how it is doing it. I'd experienced this before when <a href="http://lexicalclosures.blogspot.com/2009/01/paging-in-aspnet-using-nhiberate.html">trying to implement paging using NHibernate in an ASP.NET GridView control</a> and it was frustrating. Wait until the havoc <a href="http://www.microsoft.com/web/webmatrix/">WebMatrix</a>, <a href="http://www.microsoft.com/visualstudio/en-us/lightswitch">LightSwitch</a>, and friends will unleash on the .NET community.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-85272765543764577612010-07-26T00:25:00.000-07:002010-09-02T22:39:02.098-07:00Debugging JavaScript memory leak in IEI was recently working on one of the most difficult bugs in all of my years of programming. It involved a memory leak in IE8 caused by a couple of ASP.NET pages on a user's machine. <br />
<br />
The first indication of the problem was the iexplore.exe process in the Windows Task Manager. Intermittently watching the 'Mem Usage' column on the 'Processes' tab, the memory usage reached over 2 gigabytes, thereby significantly slowing down the user's machine. The rate of increase was about 9 MB every 30 seconds.<br />
<br />
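A quick back-of-the-envelope check (using the approximate figures above, so treat the result loosely) shows how long a leak at that rate takes to reach 2 GB:<br />

```javascript
// ~9 MB leaked every 30 seconds; how many hours until 2 GB (2048 MB)?
const mbPerSecond = 9 / 30;
const hoursTo2GB = 2048 / mbPerSecond / 3600;
console.log(hoursTo2GB.toFixed(1)); // → 1.9
```

In other words, a couple of hours of normal use was enough to cripple the machine.<br />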
As with most web app bugs in IE, I checked to see if the problem was reproducible in Firefox. No leak. Next, I used two tools to help further identify the leaks: <a href="http://blogs.msdn.com/gpde/pages/javascript-memory-leak-detector-v2.aspx">JavaScript Memory Leak Detector (IEJSLeaksDetector)</a> and <a href="http://home.wanadoo.nl/jsrosman/">sIEve</a>. Surprisingly, neither one indicated any memory leaks. That was when I knew this was not going to be easy to troubleshoot, much less fix.<br />
<br />
I tried running the app under IE7 compatibility view mode in IE8, but again, no leak. Skeptical of the reliability of this mode, I decided to try a true install of IE7 itself, which required removing IE8 from the user's machine. Alas, no leak. I also attempted, from a different machine, to connect to the web server running on that same user's machine to view the app (yes, one of the web app's features is to run as a local instance). Again, no leak.<br />
<br />
The problem web pages perform Ajax callbacks. Looking at the code, they were set to trigger the calls every 30 seconds. This was consistent with the rate of increase in memory usage observed earlier. Naively, I wondered: could the <a href="http://msdn.microsoft.com/en-us/library/system.web.ui.timer.aspx">ASP.NET Ajax Timer Control</a> be broken? <br />
<br />
I kept suspecting Microsoft's Ajax Timer Control, which was helping to display a "Processing..." progress image to the user every 30 seconds. My ill-conceived rationale was that perhaps the code was using an old version of the Timer Control, since the app was running on ASP.NET 2.0 and the code was written around the time the <a href="http://en.wikipedia.org/wiki/ASP.NET_AJAX">ASP.NET AJAX framework</a> was originally released. <br />
<br />
Unfortunately, I temporarily fell under the spell of believing <a href="http://www.codinghorror.com/blog/2008/03/the-first-rule-of-programming-its-always-your-fault.html">"Select is broken"</a>. This delusion became painfully clear when I later confirmed that the specific Ajax framework was indeed the latest ASP.NET 3.5 web extensions for 2.0. Just to be sure, I created a standalone dummy web app with the same library that provides the Timer Control. Once again, as has been the pattern thus far: no memory leak. <br />
<br />
More detailed information was needed. At the suggestion of another developer on the project, I configured Task Manager with:<br />
<pre>'Processes' tab -> 'View' -> 'Select Columns'
</pre>and added the following columns:<br />
<pre>'Handles'
'Threads'
'USER Objects'
'GDI Objects'
</pre>Again, closely monitoring any changes, I noted that the <a href="http://en.wikipedia.org/wiki/Graphics_Device_Interface">GDI objects</a> count was increasing by 7-8 every 30 seconds, meaning this leak must somehow be related to the graphics display. Perhaps some image (or images) was not being removed, or was being repeatedly added, on each Ajax server request? (Possibly the aforementioned progress image?)<br />
<br />
To further investigate the GDI objects findings, I used a few tools. Neither <a href="http://msdn.microsoft.com/en-us/magazine/cc188782.aspx">GDIUsage</a> nor <a href="http://www.fengyuan.com/download.html">GDIObj</a> provided any useful metrics (it's possible I did not fully understand how best to use them). However, <a href="http://www.nirsoft.net/utils/gdi_handles.html">GDIView</a> allowed me to take snapshots of the memory usage of the GDI objects. <br />
<br />
I manually copied the initial output displayed in GDIView and pasted that data into a text file. I subsequently waited until the data refreshed 30 seconds later and then copied that output into a separate text file. Finally, I performed a diff on the two files and it reconfirmed what was shown in Task Manager. The following text was the diff results:<br />
<pre>Handle      Object Type  Kernel Address  Extended Information
---------------------------------------------------------------
0x86040059 Region 0xe2e68008
0x3a0508e1 Bitmap 0xe45af558 Width: 1042, Height: 586 , Bits/Pixel: 32
0x29040972 Region 0xe55938d8
0x3b040bb4 Region 0xe42f9868
0x9a040bd1 Region 0xe1cd5738
0xb3010bec DC 0xe3f9e9c8
0x17300c6a Pen 0xe40dce88 Color: 0x02b0b0b0, Width: 0, Style: 0x00000000
0xe2050fb3 Bitmap 0xe2268840 Width: 1046, Height: 150 , Bits/Pixel: 16
0x133011fb Pen 0xe565c868 Color: 0x02b0b0b0, Width: 0, Style: 0x00000000
0x7004123c Region 0xe5685c18
0x4e0415eb Region 0xe3031688
0xc504172b Region 0xe183cdb8
0x94041733 Region 0xe362f2b8
0xae05180e Bitmap 0xe2c44840 Width: 1046, Height: 607 , Bits/Pixel: 16
0xfe01198b DC 0xe4c22008
0x60011b7d DC 0xe28af008
</pre><br />
Clearly some bitmap was indeed being added repeatedly but the questions now were which one and why? The other developer helping out noted that the width of 1042 and height of 150 match the size of the web app within the browser. An important detail.<br />
<br />
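The manual copy-and-diff step can be sketched in a few lines of JavaScript (an assumed workflow for illustration, not the tooling I actually used):<br />

```javascript
// Given two snapshots of GDIView output (arrays of lines), report only
// the handle lines that appear in the second snapshot but not the first.
function newHandles(before, after) {
  const seen = new Set(before);
  return after.filter(line => !seen.has(line));
}

const before = ["0x29040972 Region 0xe55938d8", "0xb3010bec DC 0xe3f9e9c8"];
const after = [...before, "0x3a0508e1 Bitmap 0xe45af558 Width: 1042"];
console.log(newHandles(before, after)); // → [ '0x3a0508e1 Bitmap 0xe45af558 Width: 1042' ]
```

Repeating this between refreshes surfaces exactly the objects that keep accumulating, such as the bitmaps above.<br />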
Recalling that the purpose of the Ajax timer control was to display a processing progress message and image (specifically a gif file) on both web pages, I decided to delve deeper into that area of the code. A 'progress' element containing the gif image is shown to the user via JavaScript. In that same JS function, an IFrame element is created and added to the 'Content' property of the 'progress' element. I noticed that the width and height of this IFrame were set to the document's width and height: <br />
<pre>var progressIFrame = document.createElement("IFRAME");
// ...
progressIFrame.style.width = document.body.clientWidth;
progressIFrame.style.height = document.body.clientHeight;
</pre>This was consistent with the results from the previously listed diff of the memory usage since the bitmap's dimensions were about the same size as the browser's viewport. Then, the very next line in the function struck me as strange:<br />
<pre>progress.Content = progress.parentNode.insertBefore(progressIFrame, progress);
</pre>The return value of 'insertBefore' is a reference to the 'progressIFrame' element, which is then assigned to the 'progress' element's 'Content' property. Not only does the line set that property, it also inserts the IFrame into the DOM as a sibling of the very same 'progress' element. <br />
<br />
Why add the IFrame element to the DOM in <i>two</i> places? It was far from clear what the intent of calling 'insertBefore' was if the IFrame element was <i>already</i> being added as part of the progress element itself.<br />
<br />
Therefore, I changed the code to set the property directly and not bother with 'insertBefore': <br />
<pre>progress.Content = progressIFrame;
</pre>and that stopped the memory leak!<br />
<br />
Now, admittedly, <a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html">I am not comfortably well-versed in JavaScript</a>, and this is one of those cases where I do not fully understand why the IE browser behaved the way it did in reaction to how the JavaScript was written. <br />
<br />
Sometimes this is the outcome of debugging code (particularly code written by someone else): although you come away learning things you did not know before, you might still not know how or why something was broken to begin with, or why the fix itself even worked.<br />
<br />
Like writing code, debugging code is both a skill and an art. However, it is certainly a different mindset.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-39709244753369818412010-06-27T23:21:00.000-07:002010-07-25T02:28:17.368-07:00Project Euler solution # 1I had an interest in doing some programming puzzles for the usual reasons: to practise and improve my logic skills, to explore new programming languages, and simply to have... fun. <a href="http://www.pythonchallenge.com/">The Python Challenge</a> seemed appropriate when I started learning <a href="http://lexicalclosures.blogspot.com/2008/07/snake-bitten-by-python-rip-nant.html">Python</a>. I enjoyed doing the first few.<br />
<br />
I then took a stab at one of the more popular puzzle websites, <a href="http://projecteuler.net/">Project Euler</a> (pronounced <i>oil</i>-er, not <i>you</i>-ler as I had incorrectly assumed). This site not only encourages identifying efficient algorithms in your programming language of choice to solve the problems, but also strongly emphasizes math, requiring more and more research as you advance to the later problems.<br />
<br />
Starting off with problem # 1:<br />
<block><br />
<a href="http://projecteuler.net/index.php?section=problems&id=1">Add all the natural numbers below one thousand that are multiples of 3 or 5.</a><br />
</block><br />
Here is my first run, brute force attempt at the solution using Python: <br />
<pre>divisors = [3, 5]
multiples = []
for i in range(1, 1000):
del multiples[:]
for divisor in divisors:
if i % float(divisor) == 0:
multiples.append(divisor)
if multiples:
print i,' is a multiple of ',', '.join([str(multiple) for multiple in multiples])
</pre><br />
The earlier code solved the problem but, shortly after churning it out, I discovered that it did not produce the answer in the expected format. Therefore, I re-worked the code to do so:<br />
<pre>multiples = []
for i in range(1, 1000):
    if i % 3 == 0 or i % 5 == 0:
        multiples.append(i)
print sum(multiples)
</pre><br />
While this returns the correct value of 233168, the code could use some pythonic polish. A dab of <a href="http://lexicalclosures.blogspot.com/2008/09/comprehending-list-comprehensions.html">list comprehensions</a> does the job:<br />
<pre>print sum([x for x in range(1, 1000) if x % 3 == 0 or x % 5 == 0])
</pre><br />
In the spirit of one of my reasons for doing these puzzles (exploring other languages), here is a solution in C:<br />
<pre>#include <stdio.h>

int main(void)
{
    int sumOfMultiples = 0;
    int i;
    for (i = 1; i < 1000; i++)
    {
        if (i % 3 == 0 || i % 5 == 0) {
            sumOfMultiples += i;
        }
    }
    printf("%i \n", sumOfMultiples);
    return 0;
}
</pre>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-8416807644832025349.post-70606024190081519142010-05-31T00:39:00.000-07:002010-07-25T02:45:20.854-07:00Cross-browser compatibilityMy first true deep dive into heavy <a href="http://lexicalclosures.blogspot.com/2008/10/is-javascript-next-big-language.html">JavaScript</a> programming was working on a project that required adding Firefox support for a web application that only targeted Internet Explorer. The web application was originally written for <a href="http://ie6funeral.com/">IE6</a> and therefore contained some hairy JavaScript. <br />
<br />
I experienced firsthand the same rite of passage as millions of web developers around the world. Fortunately, adding support for Firefox also meant that the web app no longer needed to support IE6 (only IE7+) so fixing the JavaScript to be cross-browser compatible became a bit more reasonable and sane.<br />
<br />
Now, some snippets collected for these changes in no particular order:<br />
<a name="1"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#1"># 1</a></b><br />
<br />
Replacing all references to the legacy frame lookup <br />
<pre>top.frames['myId']
</pre>, which gave Firefox much trouble, with the DOM Level 2 method:<br />
<pre>document.getElementById('myId')
</pre><br />
<a name="2"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#2"># 2</a></b><br />
<br />
Replace the function 'removeNode':<br />
<pre>if (x)
{
x.removeNode();
}
</pre>with a conditional check that uses the Firefox friendly function 'removeChild' if parentNode is defined (otherwise fallback on 'removeNode'):<br />
<pre>if (x)
{
if (x.parentNode)
x.parentNode.removeChild(x); // Firefox
else
x.removeNode(); // IE
}
</pre><a name="3"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#3"># 3</a></b><br />
<br />
The 'innerText' property works in IE but not in Firefox. Instead, <a href="http://blog.coderlab.us/2005/09/22/using-the-innertext-property-with-firefox">Firefox recognizes 'textContent'</a>, which serves the same purpose. Again, use an if-else statement, this time checking which property the element supports:<br />
<pre>function setText(elem, textValue)
{
if (elem.textContent || elem.textContent == "")
{
elem.textContent = textValue; // Firefox
}
else
{
elem.innerText = textValue; // IE
}
}
</pre><a name="4"></a><br />
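As an aside, the same property check works in the other direction when reading text. A minimal sketch (the 'getText' name is mine, not from the original code):

```javascript
// Hypothetical counterpart to setText: read an element's text in either browser
// by checking whether the standards property is defined.
function getText(elem)
{
  if (elem.textContent !== undefined)
  {
    return elem.textContent; // Firefox
  }
  return elem.innerText; // IE
}
```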
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#4"># 4</a></b><br />
<br />
Dot notation for defining functions is another area where JavaScript errors emerge in Firefox:<br />
<br />
<a href="http://forums.asp.net/t/1223789.aspx">"...missing ( before formal parameters..."</a><br />
<pre>function window.onDoSomething()
{
// do some stuff
}
</pre>The fix is to swap things around and assign an anonymous function expression to the property:<br />
<pre>window.onDoSomething = function()
{
// do some stuff
}
</pre><a name="5"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#5"># 5</a></b><br />
<br />
One of the web pages had a character that used the <a href="http://en.wikipedia.org/wiki/Webdings">Webdings</a> (TrueType dingbat) font for an expressive, functional symbol. It rendered incorrectly (and confusingly) in Firefox. <a href="http://www.alanwood.net/demos/wingdings.html">Substituting the equivalent Unicode character</a> resolved the discrepancy. <br />
<br />
<a name="6"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#6"># 6</a></b><br />
<br />
All AJAX calls involved the following IE6 object:<br />
<pre>var xmlHttp = new ActiveXObject('Microsoft.XMLHTTP');
</pre>As mentioned, with no need to support IE6 (something web developers dream of someday being true for all of the internet), every instance of the previous line of code is fully replaced with:<br />
<pre>var xmlHttp = new XMLHttpRequest();
</pre>Certainly, one of the more satisfying cross-browser changes.<br />
<br />
<a name="7"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#7"># 7</a></b><br />
<br />
Firefox does not support <a href="http://stackoverflow.com/questions/2116177/windows-event-is-undefined-javascript-error-in-firefox">referencing the global event object, specifically 'window.event'</a>, and when it encounters JavaScript code that attempts to do so it responds with this error message:<br />
<br />
<blockquote>window.event is not defined.</blockquote><br />
Instead, it is necessary to pass the event object as an argument via a function's parameter:<br />
<pre><code>
function myFunction(e) // <-- add 'e' as a parameter for the global event object
{
if (!e) e = window.event; // if e is undefined then set e using the IE event object
// other code
}
<button onclick="myFunction(event);">test events</button>
</code></pre><a name="8"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#8"># 8</a></b><br />
<br />
A web page had file-browser functionality for attaching a file to upload to the server. The 'onclick' event handler for an element of this tag and type <br />
<pre><input type="file" ...
</pre>was coded to be <i>programmatically</i> triggered. The reason was to allow the user to type in and edit the free-form text of the file path; the JavaScript would then create the 'input' element and fire the event on the user's behalf.<br />
<br />
Firefox does not allow this; it requires the user to manually click on the element, and the file path text is read-only and cannot be edited. It is considered <a href="http://www.quirksmode.org/dom/inputfile.html">a potential security flaw</a>, hence the restriction. The code needed to be rewritten to have the user directly fire the event and open the file browser. <br />
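The rewritten approach can be sketched roughly like this (the names here are illustrative, not from the actual code); the key point is that the script only reacts after the user's own interaction with the file input, rather than triggering the dialog programmatically:

```javascript
// Hypothetical sketch: bind a handler that fires only after the user has
// picked a file through the native dialog, instead of calling click() for them.
function bindFileBrowser(fileInput, onChosen)
{
  fileInput.onchange = function ()
  {
    // the browser fills in fileInput.value once the user makes a choice
    onChosen(fileInput.value);
  };
}
```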
<br />
<a name="9"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#9"># 9</a></b><br />
<br />
An html table on a page contained rows with hidden nested rows, functioning as a tree-like grid. When clicked, these top-level rows toggled between the style of <br />
<pre>display:none
</pre>when hiding their child rows and <br />
<pre>display:block
</pre>when showing them. <br />
<br />
In Firefox, the rows do not align properly when made visible with 'block' display style. The premature solution was to replace 'block' with <br />
<pre>display:table-row
</pre>which worked for both IE8 and Firefox. <br />
<br />
However, I later discovered that this display type was not supported in IE7. Instead, I <a href="http://stackoverflow.com/questions/249103/ie7-and-the-css-table-cell-property/645977#645977">substituted the equivalent of <i>no</i> display style at all</a>, using an empty string <br />
<pre>rowObject.style.display = ''
</pre>to show the hidden rows. Apparently, each browser knows by default how to appropriately render the rows without the html needing to be specific.<br />
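The resulting toggle logic can be sketched like this ('toggleChildRows' is a hypothetical name); the empty string clears the inline style so each browser falls back to its own default rendering for table rows:

```javascript
// Toggle a set of nested rows: 'none' hides them, '' lets the browser
// pick its default display (table-row in IE8/Firefox, block quirks avoided).
function toggleChildRows(rows, show)
{
  for (var i = 0; i < rows.length; i++)
  {
    rows[i].style.display = show ? '' : 'none';
  }
}
```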
<br />
<a name="10"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#10"># 10</a></b><br />
<br />
Consider a deeply nested event firing and then needing to prevent it from triggering other event handlers further up the DOM hierarchy. In IE, setting<br />
<pre>window.event.cancelBubble
</pre>to true should take care of this. Firefox, of course, does not recognize this property. Instead, one <a href="https://developer.mozilla.org/en/migrate_apps_from_internet_explorer_to_mozilla#Event_differences">must use</a> the <br />
<pre>event.stopPropagation
</pre>function to exercise the same control over the scope of an event.<br />
<br />
To ensure coverage across the different browsers, <a href="http://www.quirksmode.org/js/events_order.html">do this</a>: <br />
<pre>function doSomething(e)
{
if (!e) var e = window.event;
e.cancelBubble = true;
if (e.stopPropagation) e.stopPropagation();
}
</pre><a name="11"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#11"># 11</a></b><br />
<br />
Some JavaScript code was not executing at all, and there was no clear indication why the function was not defined. It turns out that using the term 'jscript' as part of the 'type' attribute in the 'script' tag:<br />
<pre><script language="javascript" type="text/jscript" ...
</pre>is, not surprisingly, <a href="http://bytes.com/topic/javascript/answers/469367-function-not-defined-javascript-error-firefox#post1804319">recognized only in IE and not in Firefox</a>. All instances of 'jscript' were replaced with 'javascript':<br />
<pre><script type="text/javascript" ...
</pre>This cross-browser issue caused so much grief for such a simple fix; I spent way too much time figuring it out. When I <a href="http://bytes.com/topic/javascript/answers/469367-function-not-defined-javascript-error-firefox#post1804319">read</a>: <br />
<br />
<blockquote>"...Nevermind, I think I found it. I inherited the code and just noticed that the original programmer had specified JScript rather than Javascript as the script language..."</blockquote><br />
I glanced over to my aspx page and my eyes immediately saw that exact error. Unbelievable.<br />
<br />
<a name="12"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#12"># 12</a></b> <br />
<br />
Another function not defined in Firefox but used in IE: <br />
<pre>window.attachEvent
</pre>In Firefox, <a href="https://developer.mozilla.org/en/migrate_apps_from_internet_explorer_to_mozilla#Attach_event_handlers">use this</a> instead:<br />
<pre>window.addEventListener
</pre>The cross-browser code might look like this:<br />
<pre>eventName = 'load';
if (window.addEventListener) // Firefox
{
window.addEventListener(eventName, myFunction, false);
}
else if (window.attachEvent) // IE
{
window.attachEvent('on' + eventName, myFunction);
}
</pre>(Note that IE requires the prefix "on" for the event name while Firefox does not.)<br />
<br />
All of the above applies to IE's <br />
<pre>window.detachEvent
</pre>For Firefox, use<br />
<pre>window.removeEventListener
</pre><br />
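A combined detach helper, mirroring the attach example above, might look like this ('removeHandler' is a hypothetical name):

```javascript
// Cross-browser detach: prefer the standards method, fall back on IE's,
// remembering that IE wants the "on" prefix in the event name.
function removeHandler(target, eventName, handler)
{
  if (target.removeEventListener) // Firefox
  {
    target.removeEventListener(eventName, handler, false);
  }
  else if (target.detachEvent) // IE
  {
    target.detachEvent('on' + eventName, handler);
  }
}
```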
<a name="13"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#13"># 13</a></b><br />
<br />
Some image icons were expected to show tooltip text when hovering over them with the mouse. However, no tooltips were shown in Firefox with the following:<br />
<pre><input type="image" disabled="disabled" title="Hi there"...
</pre>Instead, I <a href="http://stackoverflow.com/questions/1660779/simple-asp-net-tooltip-for-firefox">replaced the 'input' element tag with an 'img' element</a>:<br />
<pre><img title="Hi there" src="disabled.gif"
</pre><a name="14"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#14"># 14</a></b><br />
<br />
An html table needed to be dynamically resized and repositioned by changing its style. The original IE-only code:<br />
<pre>tableObject.style.left=400
tableObject.style.top=400
</pre>This had no effect in Firefox (the size remained the same), so I needed to <a href="http://www.ozzu.com/programming-forum/change-style-firefox-with-javascript-t64284.html">explicitly add the unit of measurement "px"</a>:<br />
<pre>tableObject.style.left=400 + "px"
tableObject.style.top=400 + "px"
</pre>Also applies to 'height' and 'width':<br />
<pre>tableObject.style.height="55px"
tableObject.style.width="33px"
</pre><br />
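A small helper capturing this fix might look like the following ('setPosition' is a hypothetical name, not from the original code):

```javascript
// Always append the "px" unit so the style takes effect in both IE and Firefox.
function setPosition(elem, left, top)
{
  elem.style.left = left + 'px';
  elem.style.top = top + 'px';
}
```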
<a name="15"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#15"># 15</a></b><br />
Dynamically adding some new html into a page relied on 'insertAdjacentHTML':<br />
<pre>document.body.insertAdjacentHTML('AfterBegin', '<div>foo</div>')
</pre>Worthless in Firefox (<a href="http://dev.w3.org/html5/spec/apis-in-html-documents.html#insertadjacenthtml">at least until HTML5 is supported</a>) so <a href="http://forums.mozillazine.org/viewtopic.php?t=445587&sid=dcc81e4619ceb9f3ee31148ba2293552">fall back on 'insertBefore'</a>:<br />
<pre>elementHtml = '<div>foo</div>';
if (document.body.insertAdjacentHTML)
{
document.body.insertAdjacentHTML('AfterBegin', elementHtml)
}
else
{
element = document.createElement("div");
element.innerHTML = elementHtml;
document.body.parentNode.insertBefore(element, document.body);
}
</pre>(Originally, I used <pre><code>document.body.insertBefore(element, document.body.childNodes[0])</code></pre>but that seemed to cause the event (specifically the 'onload' event of an image) to fire repeatedly in Firefox, so I changed it to the version listed above.)<br />
<br />
<a name="16"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#16"># 16</a></b><br />
<br />
The mouse's position was necessary to figure out where to render a dynamically injected image. In IE, to determine the X and Y coordinates relative to the web page document:<br />
<pre>window.event.clientX + document.body.scrollLeft + document.documentElement.scrollLeft
window.event.clientY + document.body.scrollTop + document.documentElement.scrollTop
</pre>The 'client<X|Y>' properties tell you the mouse position within the "<a href="http://en.wikipedia.org/wiki/Viewport">viewport</a>", the visible portion of the document. To obtain the mouse position relative to the entire document, you need to add the scroll offsets to these values using the other properties shown above. Specifically,<br />
<pre>document.<b><i>body</i></b>.scroll<<i>Left|Top</i>>
</pre>are the older (i.e. quirksmode) DOM syntax to retrieve the scroll values while <br />
<pre>document.<b><i>documentElement</i></b>.scroll<<i>Left|Top</i>>
</pre>are the more modern, standard approach. Depending on the browser, only one of these will have the actual value while the other will equal zero; therefore, it is safer and relatively harmless to include both. <br />
<br />
In stark contrast, Firefox simply uses these:<br />
<pre>e.pageX
e.pageY
</pre>For a <a href="http://www.quirksmode.org/js/events_properties.html">comprehensive code snippet that works across most major modern browsers</a>:<br />
<pre>function doSomething(e) {
var posx = 0;
var posy = 0;
if (!e) var e = window.event;
if (e.pageX || e.pageY) {
posx = e.pageX;
posy = e.pageY;
}
else if (e.clientX || e.clientY) {
posx = e.clientX + document.body.scrollLeft
+ document.documentElement.scrollLeft;
posy = e.clientY + document.body.scrollTop
+ document.documentElement.scrollTop;
}
// posx and posy contain the mouse position relative to the document
// Do something with this information
}
</pre><br />
<a name="17"></a><br />
<b><a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#17"># 17</a></b><br />
<br />
To cancel an event's default action in the local scope only, but not <a href="http://lexicalclosures.blogspot.com/2010/05/cross-browser-compatibility-javascript.html#10">stop the event from bubbling up</a> to the rest of the DOM tree, in IE set<br />
<pre>e.returnValue
</pre>to false.<br />
<br />
For Firefox, <a href="http://stackoverflow.com/questions/1000597/event-preventdefault-function-not-working-in-ie-any-help">use</a>:<br />
<pre>e.preventDefault
</pre>Cross-browser function:<br />
<pre>if(e.preventDefault)
{
e.preventDefault(); // Firefox
}
else
{
e.returnValue = false; // IE
}
</pre><br />
<b>One last (thought) snippet</b><br />
<br />
While being exposed to JavaScript's historical client-side scripting messiness in different browsers was extremely beneficial, the next time I am faced with cross-browser quirkiness I'd reach for a library like <a href="http://jquery.com/">jQuery</a> for simpler and easier web development.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-78221666401999630552010-04-30T03:15:00.000-07:002010-09-13T22:16:46.294-07:00Running Mono C# REPL on WindowsBeing a big fan of <a href="http://lexicalclosures.blogspot.com/2008/11/c-interactive-command-line.html">programming language REPLs and command-line consoles</a>, I have long awaited one for C# that I can easily install and comfortably use for lightweight validation of C# syntax and for exploring its language features. Originally, <a href="http://www.sliver.com/dotnet/SnippetCompiler/">Snippet Compiler</a> filled that role and was later replaced by <a href="http://www.linqpad.net/">LINQPad</a>, but neither one is a REPL.<br />
<br />
Fortunately, <a href="http://mono-project.com/">Mono</a>, the open source project for .NET and C# that offers cross OS platform support especially for Linux and Mac, does have <a href="http://www.mono-project.com/CsharpRepl">an interactive shell (CsharpRepl)</a> for evaluating C# statements and expressions. Since Mono also runs on Windows, I <a href="http://www.go-mono.com/mono-downloads/download.html">downloaded</a> and installed the latest Mono release 2.6 on my Windows development machine.<br />
<br />
Once installed, to run the C# REPL simply:<br />
<br />
<ol><li>Select Start -> All Programs -> Mono 2.6.1 for Windows -> Mono-2.6.1 Command Prompt</li>
<li>At the command prompt, type "csharp"</li>
</ol><br />
The first step just adds Mono's bin directory to your environment PATH for the current shell session. (Alternatively, you can navigate via the command line to the Mono bin directory and directly run the file, csharp.bat.)<br />
<br />
The C# REPL works if used within the Windows shell, cmd.exe (a.k.a. "command prompt"), but with some caveats. The first immediate one is that the prompt text "csharp >" is not displayed, making it a bit disorienting to use; it is difficult to distinguish between the input and the output of your expressions. [<b>Update</b>: <a href="http://mono.1490590.n4.nabble.com/C-REPL-shell-on-Windows-in-Mono-2-6-does-not-work-at-all-tp1507889p2164791.html">This bug was subsequently fixed </a> and the fix is available with the version 2.6.7 release.]<br />
<br />
Another caveat is the lack of <a href="http://en.wikipedia.org/wiki/Autocomplete#In_command_line_interpreters">autocomplete</a> functionality; that feature is only available in Mono's GUI command console, 'gsharp'. GSharp requires an additional install of the mono-tools package, available on the Windows platform download page under the link named "Gtk# for .NET". Once installed, to launch gsharp:<br />
<br />
<ol><li>Start -> All Programs -> Mono 2.6.1 for Windows -> Mono-2.6.1 Command Prompt</li>
<li>c:\> gsharp</li>
</ol><br />
To activate autocomplete, type in part of a word and then hit <TAB>. Sometimes auto-completing words is slow in gsharp, particularly if it has to search a large set of libraries. For example, typing the text "using Sys" and then <TAB> causes it to hang, since 'System' is the top-level .NET namespace.<br />
<br />
I decided to stick with gsharp (a.k.a. the "C# InteractiveBase Shell"?) over the csharp-plus-cmd.exe combo, not only because of the autocomplete feature but also because it does display a "csharp >" prompt. <br />
<br />
With those two issues resolved in gsharp, I continued to explore how the csharp REPL performed and behaved. The most striking deficiency I then encountered was that typing a statement containing invalid syntax did not show any output at all. This was surprising, since it goes against what I consider to be one of the hallmarks of a good REPL: immediate feedback not just on code that evaluated successfully, but also on things that failed to evaluate properly. The lack of output was strange... almost as if the REPL was missing its 'Print'. <br />
<br />
With continued use, I noticed in the examples found in the <a href="http://www.mono-project.com/CsharpRepl">Mono REPL documentation</a> that each expression or statement requires the ";" character at the end to produce any visible output; otherwise, it gets ignored. I was expecting the same behavior found in Visual Studio's <a href="http://stackoverflow.com/questions/794255/how-do-you-use-the-immediate-window-in-visual-studio/1361136#1361136">Immediate Window</a>, where ";" is not always required. I am not sure what the advantage is of always having to type ";" (other than as a constant reminder that you are using a C-based language in a REPL); it just seems like an extra unneeded keystroke. <br />
<br />
However, reading more of the REPL docs, it implies multiple declarations can be made on a single line using ";" as a delimiter:<br />
<br />
<pre>csharp> var a = "why so many semi-colons?"; 5; "more stuff!";
"more stuff!"
csharp> a;
"why so many semi-colons?"
csharp>
</pre><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">All three statements get evaluated but only the last one ("more stuff!") is printed to the screen.</div><br />
The docs also explicitly state the inverse: one declaration ending with ";" can be spread across multiple lines:<br />
<blockquote>"...Statements and expression can take multiple lines, for example, consider this LINQ query that displays all the files modified in the /etc directory in the last week. The prompt changes from "csharp" to " >" to indicate that new input is expected..."</blockquote>and<br />
<blockquote>"...Multi-line input...If your code does not fit in a single line, you can enter expressions in multiple lines. The shell will not execute the code until a valid expression has been entered or a syntax error is flagged. A special prompt is shown to indicate that ics is waiting for input..."</blockquote><br />
Although the docs state that multi-line input is supported, this does not seem to be true of my current install on Windows. On Linux, I can do this:<br />
<br />
<pre>csharp> var list = new int [] {1,2,3};
csharp> var b = from x in list
> where x > 1
> select x;
csharp> b;
</pre><br />
On Windows, however, the special indented continuation prompt " >" never appears on the new line when the ";" character has not yet been typed. This is unfortunately a notable flaw in the REPL tool on the Windows platform. <br />
<br />
In addition to the online documentation, another source describing the more common commands is available in the REPL itself. Just type "help;":<br />
<pre>"Static methods:
Describe(obj) - Describes the object's type
LoadPackage (pkg); - Loads the given Package (like -pkg:FILE)
LoadAssembly (ass) - Loads the given assembly (like -r:ASS)
ShowVars (); - Shows defined local variables.
ShowUsing (); - Show active using decltions.
Prompt - The prompt used by the C# shell
ContinuationPrompt - The prompt for partial input
Time(() -> { }) - Times the specified code
quit;
help;
TabAtStartCompletes - Whether tab will complete even on emtpy lines
"
</pre>Of course, a couple of these commands exhibit some quirks. Typing 'ShowUsing();' does not display anything in the gsharp console, although it is expected to (it behaves correctly on Linux). After trying the command several times, I looked at the original command prompt window from which gsharp was launched and saw the results of the command showing in <i>there</i>. The same was true of 'ShowVars()'. I recommend keeping the command prompt console in view while using gsharp to see any output piped outside of it. (<a href="https://bugzilla.novell.com/show_bug.cgi?id=450264">The issue appears to have been logged as a bug at the mono project</a>.)<br />
<br />
Perhaps to truly escape all of these OS specific limitations, I might be better off <a href="http://www.google.com/buzz/110648242062368902162/YDwvCaPzQQy/I-finally-upgraded-my-dev-box-at-work-to-Win7-The"> running Mono's REPL within a Linux VM on Windows</a>. However, this approach negates the benefits of a cross-platform framework that Mono aspires to be.<br />
<br />
Despite these minor difficulties, Mono's C# REPL has been a nice addition to my .NET development toolbox. It has proved useful when I needed a deeper understanding of how delegates and lexical closures behave in C# and when figuring out how to do <a href="http://lexicalclosures.blogspot.com/2008/09/comprehending-list-comprehensions.html">list comprehensions</a> in C# using List<T>.ConvertAll instead of LINQ. It provides a quick, frictionless way of observing and interacting with the functionality of C# and the .NET libraries.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-21454494027138181842010-03-30T23:35:00.000-07:002022-10-18T11:58:08.356-07:00Database schema changes unit tested using TSQLUnitHow does one test the existence of a primary key (PK) constraint belonging to a table in a database? Simple, right? Just write a test that intentionally violates that constraint. One's initial impulse would be to write a test inspecting the data contained within the PK's table. This approach is sensible and can be viewed as the conventional state-based style of testing. <br />
<br />
Let's try it. The following test using <a href="http://lexicalclosures.blogspot.com/2008/11/unit-testing-databases-with-tsqlunit.html">TSQLUnit</a> attempts to insert into a table a new row with the same primary key value as an existing row:<br />
<pre>create procedure dbo.ut_SubProduct_PkConstraint
as
begin
    declare @failMessage varchar(200),
            @err int,
            @primaryKeyError int
    set @primaryKeyError = 2627
    /* RUN TEST */
    -- grab row already in table and try to re-insert it.
    insert into dbo.SubProduct
    select top 1 sp.*
    from dbo.SubProduct sp
    set @err = @@error
    /* ASSERT TEST */
    if @err <> @primaryKeyError
    begin
        set @failMessage = 'PK constraint not defined.'
        exec tsu_failure @failMessage
    end
end
</pre>The above code does not work well as a test because the error messages about the failed duplicate-PK insert cannot be suppressed in TSQLUnit's test runner (as viewed in the 'Results' window in Sql Server Management Studio). (Even with some not-too-in-depth research, I could not find a means to hide or suppress these error messages using TSQL.)<br />
<br />
Instead, it can be tested more cleanly using SQL Server's info schema views:<br />
<pre>create procedure dbo.ut_Product_PrimaryKey
as
begin
    declare @failMessage varchar(200),
            @tableName varchar(40),
            @primaryKeyColumn varchar(40)
    set @tableName = 'Product'
    set @primaryKeyColumn = 'ProductID'
    if not exists
    (
        select c.table_name, c.column_name, c.data_type
        from information_schema.columns c
        inner join information_schema.key_column_usage kcu
            on c.table_name = kcu.table_name and c.column_name = kcu.column_name
        inner join information_schema.table_constraints tc
            on tc.table_name = kcu.table_name and tc.constraint_name = kcu.constraint_name
            and tc.constraint_type = 'PRIMARY KEY'
        where c.table_name = @tableName and kcu.column_name = @primaryKeyColumn
    )
    begin
        set @failMessage = 'PK constraint not defined for ''' + @tableName + '.' + @primaryKeyColumn + ''''
        exec tsu_failure @failMessage
    end
end
</pre>That's it. You now have a test that handles primary key constraints. <br />
<br />
Of course, this is not the last and only time that primary keys will require testing. While the above sproc does a satisfactory job, it is not reusable for other tables. Let's refactor it into a more generic "Assert" sproc in an xUnit style:<br />
<pre>create procedure dbo.tsux_AssertPrimaryKeyExists
/* This is an extension to the TSQLUnit framework. */
(
    @tableName varchar(40),
    @keyColumn varchar(40),
    @failMessage varchar(255) = null
)
as
begin
    if not exists
    (
        select c.table_name, c.column_name, c.data_type
        from information_schema.columns c
        inner join information_schema.key_column_usage kcu
            on c.table_name = kcu.table_name and c.column_name = kcu.column_name
        inner join information_schema.table_constraints tc
            on tc.table_name = kcu.table_name and tc.constraint_name = kcu.constraint_name
            and tc.constraint_type = 'PRIMARY KEY'
        where c.table_name = @tableName and kcu.column_name = @keyColumn
    )
    begin
        if @failMessage is null
            set @failMessage = 'PK constraint not defined for ''' + @tableName + '.' + @keyColumn + ''''
        exec tsu_failure @failMessage
    end
end
</pre>Your actual test becomes more compact and easier to understand:<br />
<pre>create proc dbo.ut_Product_PrimaryKey
as
begin
exec dbo.tsux_AssertPrimaryKeyExists
@tableName = 'Product',
@keyColumn = 'ProductID'
end
</pre>Verifying pure schema changes, such as the creation of new tables, columns, and constraints, via unit tests is better served by querying SQL Server's system tables and views for the necessary meta information. This has proven preferable to performing data-centric, state-based tests. (The technique reminds me a little of using reflection in .NET to do testing.) <br />
<br />
The earlier test for primary keys can also be applied to foreign key constraints. However, foreign keys require validating some additional pieces, including the table and column being referenced. After (once again) finding suitable tsql via an online search, here is the assert sproc:<br />
<pre>create procedure dbo.tsux_AssertForeignKeyExists
/* This is an extension to the TSQLUnit framework. */
(
@tableName varchar(40),
@foreignKeyColumn varchar(40),
@referenceTable varchar(40),
@referenceColumn varchar(40),
@failMessage varchar(255)=null
)
as
begin
if not exists
(
/*
This is a modified version of the tsql query used to retrieve foreign key info
courtesy of:
http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/Q_22952666.html
*/
select
object_name(fkeyid) as TableName,
a.name as FKColumn,
object_name(constid) as FKConstraint,
object_name(rkeyid) as ReferenceTable,
b.name as ReferencedColumn
from sysforeignkeys f
inner join syscolumns a on a.id = f.fkeyid and a.colid = f.fkey
inner join syscolumns b on b.id = f.rkeyid and b.colid = f.rkey
where
fkeyid = object_id( @tableName )
and a.name = @foreignKeyColumn
and object_name(rkeyid) = @referenceTable
and b.name = @referenceColumn
)
begin
if @failMessage is null
set @failMessage = 'FOREIGN KEY does not exist for ''' + @tableName + '.' + @foreignKeyColumn + ''''
exec tsu_failure @failMessage
end
end
</pre>Now the new unit test would plainly look like this:<br />
<pre>create procedure dbo.ut_Product_ForeignKeys
as
begin
exec dbo.tsux_AssertForeignKeyExists
@tableName ='Product',
@foreignKeyColumn ='CategoryID',
@referenceTable ='Category',
@referenceColumn ='CategoryID'
end
</pre><a href="http://www.agiledata.org/essays/databaseTesting.html">Database Testing: How to Regression Test a Relational Database</a>, which details areas of any database that should be tested, was an influence on developing these types of <a href="http://en.wikipedia.org/wiki/Data_Definition_Language">data definition language (DDL)</a> tests. As an example, referential integrity is mentioned as an important area to test. On the surface, it may seem overkill to set up tests for PKs and FKs. However, even superficially minor data errors can be costly. <br />
<br />
I was once tasked with expanding the size of a core primary key column that had multiple dependencies on other tables (as well as views and sprocs). As expected, making the change required temporarily dropping the PK and FK constraints on those other tables and then adding them back after applying the change.<br />
<br />
Herein lies the risk. What if the "adding back" part was accidentally forgotten and not included in the change script? What if it was temporarily commented out with the intention of uncommenting it later, but then overlooked? Allowing a deficient, regression-inducing script to roll out into a live production environment would be poor software development.<br />
<br />
What automated unit tests provide in this situation is a safety net: the constraints become far less likely to be missed or forgotten. The tests enforce the existence of those key constraints and firmly establish them as requirements for the database. They also instill greater confidence to make these sorts of changes by providing immediate feedback during development (rather than much later) if the constraints are not set.<br />
<br />
Another situation where unit testing using meta data came in handy was increasing the data type length of a column. Initially, when the varchar size of a few columns needed to increase, I had some data-heavy tests performing row insertions. These tests were fragile since they would break the TSQLUnit test runner itself if a test inserted data larger than the expected size. Instead, I created a generic assert stored procedure that uses the system function 'COL_LENGTH': <br />
<pre>create procedure dbo.tsux_AssertColumnLength
/* This is an extension to the TSQLUnit framework. */
(
@tableName varchar(40),
@columnName varchar(40),
@expectedLength smallint,
@failMessage varchar(200)=null
)
as
begin
declare @actualLength smallint
select @actualLength = COL_LENGTH(@tableName, @columnName)
-- COL_LENGTH returns null when the column does not exist
if @actualLength is null or @expectedLength != @actualLength
begin
if @failMessage is null
set @failMessage = 'Column length for ''' + @tableName + '.' + @columnName + ''' does not match expected value.'
exec tsu_failure @failMessage
print 'Expected: ' + cast(@expectedLength as varchar(6))
print 'Actual: ' + cast(@actualLength as varchar(6))
end
end</pre><br />
Again, this is reusable and reliable code that avoids relying on sql runtime errors (such as truncation failures) to indicate test failure.<br />
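For example, a test built on this assert stays compact (the table, column, and expected length here are purely hypothetical):<br />
<pre>create procedure dbo.ut_Product_ProductNumberLength
as
begin
 -- '40' is an illustrative expected length only
 exec dbo.tsux_AssertColumnLength
 @tableName = 'Product',
 @columnName = 'ProductNumber',
 @expectedLength = 40
end
</pre>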
<br />
Other examples of generic asserts for columns aside from length:<br />
<br />
* tsux_AssertColumnExists<br />
* tsux_AssertColumnDataType<br />
<br />
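The first of these can be sketched using the same information_schema approach as the earlier asserts (illustrative, untested code, not part of the original extension set):<br />
<pre>create procedure dbo.tsux_AssertColumnExists
/* This is an extension to the TSQLUnit framework. */
(
 @tableName varchar(40),
 @columnName varchar(40),
 @failMessage varchar(255)=null
)
as
begin
 if not exists
 (
 select 1
 from information_schema.columns
 where table_name = @tableName and column_name = @columnName
 )
 begin
 if @failMessage is null
 set @failMessage = 'Column ''' + @tableName + '.' + @columnName + ''' does not exist'
 exec tsu_failure @failMessage
 end
end
</pre><br />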
Also, some other areas using test assertions:<br />
<br />
* permissions on an object (quite important and often overlooked until too late)<br />
* stored procedure parameter lengths<br />
* existence of tables, views, and other similar database objects<br />
* schemabinding on views<br />
<br />
What's more, these types of tests can easily be code generated. For example, if database changes include adding new columns, then unit tests can be generated by extracting the columns' meta data (e.g. name, datatype, length) as defined in a sql script (or even from an xml file or spreadsheet). The same can be done with existing objects and structures requiring DDL changes, except the meta data can be pulled from the database's system tables/views. In this situation, you gain some automatic test coverage for your current schema before attacking it with alterations. <br />
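One low-tech way to bootstrap this kind of generation is to query the information schema itself, letting each output row become a generated assert call. A rough sketch (the table name is an example only, and varchar(max) columns, which report a length of -1, are skipped):<br />
<pre>-- generate tsux_AssertColumnLength calls for the varchar columns
-- of an existing table (here 'Product', purely as an example)
select 'exec dbo.tsux_AssertColumnLength ' +
 '@tableName = ''' + c.table_name + ''', ' +
 '@columnName = ''' + c.column_name + ''', ' +
 '@expectedLength = ' + cast(c.character_maximum_length as varchar(10))
from information_schema.columns c
where c.table_name = 'Product'
 and c.data_type = 'varchar'
 and c.character_maximum_length > 0
</pre>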
<br />
I am not sure whether any of the techniques detailed earlier could be applied to check constraints (e.g. an inserted/updated datetime value should not be greater than today's date). Perhaps check constraints can be sufficiently managed with simple, direct unit tests using data rather than meta data. (Although it could be argued that the logic for most check constraints should live in the application code and not in the database, in practice this is not always the case.) For now, without an immediate need, it will remain speculative.<br />
<br />
This is likely my last post on TSQLUnit. In the future, my data access tests will probably be created in and executed from the application code rather than on the SQL Server side. However, if I do find myself in a scenario where database-only unit tests are needed, I'd probably try out <a href="http://tst.codeplex.com/">T.S.T. the T-SQL Test Tool</a> since it has built-in assert functions.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-34875726765458859872010-02-24T00:13:00.000-08:002010-06-06T00:25:12.910-07:00Software developer: an asset not a costI was quoted last year in a post, <a href="http://www.javaworld.com/community/node/2836">Convincing the Boss to Pay for Developer Training</a>:<br /><blockquote><br />"...For example, software developer Ray Vega points out, "The 'culture' of the company is dedicated (sic) by how it makes money and who is responsible for helping to making that money." In general, he says, software companies whose product is technology-based tend to be better at providing and paying for skill improvement resources for tech employees. When the technology workers training is closely related to company revenue, it's easier to get the boss to listen. However, Vega adds, "If you work on an application that has no direct association with how the company makes money (for example, an internal time tracking application for an insurance company) then it will certainly be an uphill battle."</blockquote><br /><br />My response was based on not just my own work experience but also on an old Joel Spolsky article <a href="http://www.joelonsoftware.com/articles/FiveWorlds.html">Five Worlds</a> (which was provided to the author in <a href="http://www.linkedin.com/answers/technology/software-development/TCH_SFT/463465-10138">my original response</a> to her research on the topic). 
The "world" you write code for makes a significant difference in the overall health of your professional career beyond just training costs.<br /><br />Most good programmers probably don't need formal "training" focused on a vendor specific technology, platform, or framework with a potentially short shelf life. They'd more likely learn on their own by creating and working on a side project or on a simple prototype specifically for that purpose. However, one exception is if the training class included like-minded individuals with whom one can collaborate and reciprocally learn from. Sadly, these are rare to find and difficult to vet prior to investing one's time in a chosen course.<br /><br />That said, sometimes it doesn't hurt to be exposed to informational seminars, conferences, or coursework that cover the enduring fundamentals of software development (or even computer science) that people tend to forget or simply don't know.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-20447427530179038522009-02-27T23:38:00.000-08:002022-10-18T11:58:08.356-07:00It's True (And Not False): Assert Equality in TSQLUnitWhen working with <a href="http://en.wikipedia.org/wiki/XUnit">xUnit style frameworks</a> like <a href="http://www.nunit.com/index.php">NUnit</a>, it is generally expected to find support for <a href="http://en.wikipedia.org/wiki/Assert">assertions</a>. Asserts are an indispensable tool in any testing process. Unit testing at its core is simply verifying whether something has occurred or not occurred i.e. checking if the state has changed marking it as either true or false. Built-in assert syntax provides programmers a means to perform this level of testing repeatedly and consistently. One of the more common assertions is to compare the values of two items, such as variables or objects, sometimes referred to as equality asserts. 
As an example, here is C# test code using NUnit to test a method that simply adds two arguments:<br /><pre>// some static class<br />public static int Add(int firstItem, int secondItem)<br />{<br /> return firstItem + secondItem;<br />}<br /><br />[Test]<br />public void Verify_sum_of_5_and_3_equals_8()<br />{<br /> Assert.AreEqual(8, Add(5,3));<br />}<br /></pre>Imagine my disappointment to discover the absence of equality assert functionality in <a href="http://tsqlunit.sourceforge.net/">TSQLUnit</a>. The framework inexplicably has no native support for them. (Perhaps because the project has not been active for a very long time.) Instead, you have to construct your own tailor-made assertions outside of the framework itself.<br /><br /><a href="http://lexicalclosures.blogspot.com/2008/11/unit-testing-databases-with-tsqlunit.html">While using TSQLUnit</a> without the aid of asserts, a definite pattern emerged as tsql plumbing code rapidly began to replicate in numerous tests. 
Here is an example of a unit test, plagued with the same identical code found in multiple places, whose purpose is to verify whether a 'ProductNumber' column has been successfully updated:<br /><pre>create procedure dbo.ut_CanUpdateProductNumber<br />/* UNIT TEST */<br />as<br />begin<br /> declare @expectedProductNumber varchar(40),<br /> @actualProductNumber varchar(40),<br /> @currentAccountID varchar(16)<br /><br /> set @expectedProductNumber = '1234567890'<br /><br /> -- find a account to test<br /> select top 1 @currentAccountID=AccountNbr<br /> from dbo.Account<br /><br /> /* RUN TEST */<br /> -- update product number<br /> update dbo.Account<br /> set ProductNumber = @expectedProductNumber<br /> where AccountNbr = @currentAccountID<br /><br /> -- get updated product number<br /> select @actualProductNumber=l.ProductNumber<br /> from dbo.Account l<br /> where l.AccountNbr = @currentAccountID<br /><b> <br /> /* ASSERT TEST */<br /> if @actualProductNumber != @expectedProductNumber<br /> begin <br /> exec tsu_failure 'The product number is not the same.'<br /> print 'Expected: ' + @expectedProductNumber<br /> print 'Actual: ' + @actualProductNumber <br /> end</b><br />end<br /></pre>The duplicated code is the conditional 'if' block at the end of the stored procedure where the assertion is executed using 'tsu_failure', the obligatory proc call to the TSQLUnit framework that transforms your tsql code into a real live unit test. All test sprocs are built around this critical function. Unfortunately, 'tsu_failure' does not handle actual comparisons between two values, but only <i>after</i> the comparison has been made. It was not designed to recognize when value comparisons are useful or required. 
Instead, the surrounding test code is responsible for making that evaluation, in this case, using a custom conditional statement not originating from any TSQLUnit function.<br /><br />In addition, being accustomed to seeing in NUnit test messages that display the detailed results of value comparisons (i.e. expected against actual), print statements were added to the test procs to simulate that same text. For example, the following is what would be shown if the aforementioned test were to fail:<br /><blockquote>The product number is not the same.<br />Expected: 1234567890<br />Actual: 0987654321</blockquote>Although a welcomed improvement in the feedback provided by the test runner's results, I found myself repeatedly injecting that same structure over and over in numerous other tests.<br /><br />When this form of repetition occurs, a strategy can be adopted of either (1) continuing copying and pasting code, (2) using code generation, or (3) formulating and developing some reusable code component to manage the duplication. With # 1 and # 2 being obvious maintenance sinkholes draining away any value earned from the test code, implementing # 3 was a more sensible choice.<br /><br />To fight off test code rot, the 'if' block was refactored and encapsulated into a separate, shareable stored procedure (think <a href="http://www.refactoring.com/catalog/extractMethod.html">Extract Method</a>). Nothing extravagant but quite effective:<br /><pre>create procedure dbo.tsux_AssertAreEqual<br />/* This is an extension to the TSQLUnit framework. 
*/<br />(<br /> @expected varchar(8000),<br /> @actual varchar(8000),<br /> @failMessage varchar(255)<br />)<br />as<br />begin<br /> if @actual != @expected<br /> begin <br /> exec tsu_failure @failMessage<br /> print 'Expected: ' + @expected<br /> print 'Actual: ' + @actual <br /> end<br />end<br /></pre>Calling this new proc provides the familiar, sought-after "Assert.AreEqual" functionality found in NUnit and in a lot of other test frameworks. The old 'if' block in the original test was subsequently replaced with the new assert proc:<br /><pre>create procedure dbo.ut_CanUpdateProductNumber<br />/* UNIT TEST */<br />as<br />begin<br /> /*...unaltered code...*/<br /><b> <br /> /* ASSERT TEST */<br /> exec dbo.tsux_AssertAreEqual<br /> @expectedProductNumber,<br /> @actualProductNumber,<br /> 'The product number is not the same.'</b><br />end<br /></pre>Now that we have a general utility assert sproc for the varchar type, we still have other data types, including int and datetime, that can also benefit from assertions of their own. Since TSQL does not support a flexible language feature like <a href="http://en.wikipedia.org/wiki/C_Sharp_%28programming_language%29#Generics">C# type generics</a> for its stored procedures, creating sprocs for each data type is the only clear option to expand this functionality beyond varchar:<br /><pre>tsux_AssertDatesAreEqual<br />tsux_AssertIntsAreEqual<br /></pre>I can understand why the creator of TSQLUnit might not have initially built asserts into the framework since it requires building one for each and every kind of data type in TSQL. Therefore, in its place, the burden falls on the user (i.e. 
me) to add additional asserts to the test code base as the need arises.<br /><br />One kind of assert involving condition testing that might be interesting to implement but I am uncertain if it is remotely doable is this:<br /><pre>set @sqlConditionToEvaluate = (@expectedColor = 'BLUE'<br />AND @expectedSize = 23 OR StartDate between '1/1/11' and '2/2/22')<br />exec tsux_AssertIsTrue(@sqlConditionToEvaluate)<br /></pre>or more concisely:<br /><pre>exec tsux_AssertIsTrue(@expectedColor = 'BLUE'<br />AND @expectedSize = 23 OR StartDate between '1/1/11' and '2/2/22')<br /></pre>Maybe this could be achieved using dynamic sql and with storing each condition to verify within a 'TABLE' data type variable (functioning as an array) that can be looped checking each one to be true or false. However, implementing complex asserts to this extreme extent is a strong indication that TSQLUnit might no longer conceivably be the appropriate tool to write unit tests. It might be preferable to consider alternate unit testing frameworks that operate entirely <i>outside</i> of the database using a language other than TSQL that is better equipped for elaborate conditions and logic flow.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-8416807644832025349.post-22276136162961962202009-01-30T22:46:00.000-08:002010-01-11T23:59:45.815-08:00Paging in ASP.NET using NHiberate<a href="http://www.hibernate.org/343.html">NHibernate</a>, the object-relational mapping (ORM) framework for .NET, supports custom pagination for collections. It provides a potential alternative to the built-in paging mechanism and native support found in ASP.NET GridView web controls. 
NHibernate exposes in the API for IQuery and ICriteria two methods, <a href="http://www.hibernate.org/hib_docs/nhibernate/1.2/reference/en/html/queryhql.html#queryhql-tipstricks">SetFirstResult and SetMaxResult</a>, that can be used to enable paging:<pre><br /> Collections are pageable by using the IQuery interface with a filter:<br /><br /> IQuery q = s.CreateFilter( collection, "" ); // the trivial filter<br /> q.setMaxResults(PageSize);<br /> q.setFirstResult(PageSize * pageNumber);<br /> IList page = q.List();<br /></pre><br /><b>Adding Paging to the Base DAO</b><br /><br />The design and architecture of my project, where NHibernate style pagination shall be introduced, was deeply influenced by <a href="http://www.codeproject.com/KB/architecture/NHibernateBestPractices.aspx">Billy McCafferty's NHibernate Best Practices with ASP.NET</a> resulting in the proliferation of <a href="http://en.wikipedia.org/wiki/Data_Access_Object">Data Access Objects (DAO)</a> throughout the guts of the application. Each DAO maps one-to-one to a single matching table in the database.<br /><br />For example, a 'Task' table would have a corresponding 'TaskDao' in the data access layer (DAL) of the application. 
All of these DAOs inherit from the same base class 'GenericNHibernateDao' responsible for containing commonly shared code including managing NHibernate sessions and providing generic methods for summoning a specific persisted instance by ID and for saving/deleting an existing instance.<br /><br />The initial step to implement paging was adding the following members to the aforementioned base class 'GenericNHibernateDao':<br /><pre><br />public abstract class GenericNHibernateDao<t> : IGenericDao<t><br />{<br /> public int PageSize<br /> {<br /> get { return _pageSize; } set { _pageSize = value; }<br /> }<br /><br /> public int PageNumber<br /> {<br /> get { return _pageNumber; } set { _pageNumber = value ; }<br /> }<br /><br /> protected int GetFirstResultPosition()<br /> {<br /> return _pageSize * (_pageNumber - 1);<br /> }<br /><br /> protected void SetPagingFor(IQuery query)<br /> {<br /> query.SetFirstResult(<wbr>GetFirstResultPosition());<br /> query.SetMaxResults(_pageSize)<wbr>;<br /> }<br /><br /> protected void SetPagingFor(ICriteria criteria)<br /> {<br /> criteria.SetFirstResult(<wbr>GetFirstResultPosition());<br /> criteria.SetMaxResults(_<wbr>pageSize);<br /> }<br /><br /> /*<br /> other non-paging related members...<br /> */<br />}<br /></t></t></pre><br />The property 'PageSize' gets/sets the number of instances expected to be displayed on a web page for a specific strongly typed collection. Essentially, it handles how many rows to return for a embedded control (such as a GridView) on the page. This property is intended to be internally consumed by NHibernate's 'SetMaxResults' method belonging to IQuery or ICriteria. 
For example:<br /><pre><br />IQuery query = Session.CreateQuery();<br />query.SetMaxResults(_pageSize)<wbr>;<br /></pre><br />'PageNumber' specifies the set of multiple instances to be returned and displayed on a web page as identified and grouped by a page sequence numeric value (This is the equivalent of 'PageIndex' property for GridViews). For example, if you had a total of five pages worth of data rows then displaying the second page would require setting the page number value to '2' (e.g. _dao.PageNumber = 2). Just as with 'PageSize', this property is intended to be used by the method 'GetFirstResultPosition' as will be explained next.<br /><br />The protected method 'GetFirstResultPosition' calculates the actual row at which to start paging based on the values provided by 'PageSize' and 'PageNumber'. This method's return value is expected to be passed to NHibernate's 'SetFirstResult' method, once again, part of IQuery or ICriteria. For example:<br /><pre><br />IQuery query = Session.CreateQuery();<br />query.SetFirstResult(<wbr>GetFirstResultPosition());<br /></pre><br />The overloaded method named 'SetPagingFor' performs the actual paging functionality via the 'SetFirstResult' and 'SetMaxResult' methods of IQuery and ICriteria. (As an aside, the use of IQuery to build and execute HQL is much more common on this project in comparison to the almost non-existent use of ICriteria).<br /><br />The base class 'GenericNHibernateDao' was also modified to have one of its existing methods 'GetAll' call 'SetPagingFor'. 
(The 'GetAll' method simply returns a list of all strongly type objects in the database without any specified criteria or filtering):<br /><pre><br />// GenericNHibernateDao class<br />public IList<t><T> GetAll()<br />{<br /> ICriteria criteria = Session.CreateCriteria(<wbr>persitentType);<br /> <span style="font-weight: bold;">SetPagingFor(criteria);</span> // this is newly added!<br /> return criteria.List<t>();<br />}<br /></t></t></pre><br />As it shall be made clear later, this new line of code will optionally provide paging functionality when 'GetAll' is requested, if needed, but not required per se.<br /><br /><b>Applying Paging Functionality in the DAOs</b><br /><br />Now that the base class 'GenericNHibernateDao' has been updated to manage paging, any of its derived DAO classes are instantly equipped to perform paging themselves. To actually invoke the paging functionality for any of the DAOs, simply set the appropriate values for page size and page number as shown in this example for the 'TaskDao' class:<br /><pre><br />// In context (such as a Presenter or Controller class<br />// part of an MVP/MVC applied framework)<br />TaskDao _taskDao = new TaskDao();<br />_taskDao.PageSize = 20;<br />_taskDao.PageNumber = 5;<br />IList<TaskDao><task> list = _taskDao.GetAll();<br /></task></pre><br />Without paging, 'GetAll' would have returned something in the neighborhood of 100 or so rows. With paging, the returned list will instead be only 20 rows starting at row (i.e. position) # 80. While a hundred rows might not sound too substantial, larger sets of data can have a more noticeable effect on your application's day-to-day operations if your table contains tens or hundreds of thousands of rows. Your performance will be progressively impacted as your application scales with more data.<br /><br />On the other hand, NHibernate's paging will significantly decrease the size of your result set. 
This is in stark contrast to the default behavior of the existing paging available in any GridView control. If this native functionality of the control is used, it will return all rows from the database and then page them in memory. As mentioned earlier, this can lead to slower performance as your data grows. More on this later.<br /><br />If, for any reason, paging is not essential for a particular web page in possession of a control bound to a strongly typed list (for example,a very small static list of data) then setting the page size and number properties is not required at all. Simply call 'GetAll' by itself disregarding the 'PageSize' and 'PageNumber' properties and the DAO should return all rows found in the associated database table. What makes this possible is that the default values for those two paging properties are formally declared in the DAO base class to behave as expected for "non-pageable" collections:<br /><pre><br />public abstract class GenericNHibernateDao<t> : IGenericDao<t><br />{ <br /> protected int _pageSize = -1; // default for unlimited page size<br /> protected int _pageNumber = 1; // default for first item in collection<br /><br /> // more members...<br />}<br /></t></t></pre><br />Typically, a DAO might have other custom methods that return more narrowly focused (i.e. filtered) lists of typed objects than what 'GetAll' offers. For these other DAO methods, the same pattern can be followed by adding the one line of code calling 'SetPagingFor'. 
For example, the 'TaskDao' might have a method that returns tasks that were completed in 2006 excluding any other tasks not done within that same year:<br /><pre><br />// TaskDao class<br />public IList<TaskDao><task> GetTasksCompletedIn2006()<br />{<br /> IQuery query = Session.CreateQuery(<br /> "some HQL statement that filters tasks by 2006...");<br /> <span style="font-weight: bold;">SetPagingFor(query);</span> // newly added!<br /> return query.List<task>();<br />}<br /></task></task></pre><br />It is generally good practice not to pass the values for page size and number directly via the parameter list for any of these custom filtered data access methods. A few reasons to avoid this: (a) it can quickly clutter the intent of the method, and (b) it would prevent the paging functionality from being optional and, as a result, become less flexible, less reusable, and more cumbersome to work with.<br /><br />Consequently, while it would be tempting to write the method's signature as such:<br /><pre><br />IList<TaskDao><task> list = _taskDao.<wbr>GetAllTasksWithSubtasks(<br /><wbr> param1, param2, param3, ..., paramN, <span style="font-weight: bold;">pageSize, pageNumber</span>);<br /></task></pre><br />It is preferable to do the following instead:<br /><pre><br /><span style="font-weight: bold;">_taskDao.PageSize = 20;</span><br /><span style="font-weight: bold;">_taskDao.PageNumber = 5;</span><br />IList<TaskDao><task> list = _taskDao.<wbr>GetAllTasksWithSubtasks(<wbr><br /> param1, param2, param3, ..., paramN);<br /></task></pre><br /><b>The Downside of NHibernate Paging with ASP.NET Controls </b><br /><br />As indicated earlier, one weakness of NHibernate's paging when combined with ASP.NET's GridView control involves some loss of "out-of-the-box" functionality. 
Under more conventional circumstances, when a collection of objects are bound to a GridView control, one of the built-in paging features of that control is to automatically render on the web page the navigation hyperlinks for the pages. For example, you might see following below your control:<br /><pre><br />" 1 2 3 ... 10 "<br /></pre><br />The GridView's default behavior assumes that all data bound directly to its DataSource can be paged as long as the number of items of that data is greater than its PageSize property value. Hence, if that condition is met, the control will slice up and present the data as appropriate.<br /><br />Conversely, this is not true when using NHibernate style pagination. When binding to an NHibernate paged list method, the GridView's PageSize value will usually be set to the same size as the paged data list, a mere subset of the total data found in the database (or data source). Behind the scenes, the PageIndex property of the GridView will reset to zero because the number of items bound to its DataSource is less than or equal to the PageSize. Therefore, the GridView is under the impression that the data it receives is not pageable and, in turn, disables any paging features, unaware that more items do indeed exist but were just not provided at that moment. The paging features lost include not just immobilizing navigation links but also removing the availability of the event handlers linked with changing the page index.<br /><br />Without the convenience of auto-generating navigation links, two options emerge that might help to produce the same desired behavior:<br /><br /><ol><li>Inherit from the GridView control and attempt to confirm whether or not if any paging methods can be overridden some how or in some way</li><li>Create a custom, reusable user control to implement the navigation features</li></ol><br />Initially, the easier path was taken by developing a very rudimentary and simple implementation of option # 2. 
This entailed providing in extremely basic custom controls functionality for navigating between pages using homemade "Previous" and "Next" buttons. Currently, it is not implemented as a shareable user control nor is it able to display the pages counts. I intend on exploring option # 1 a bit more in the event that option # 2 evolves into something more elaborate and unwieldy. Until then, it is a work in progress.<br /><br />A third option does exist involving the possible use of the <a href="http://msdn.microsoft.com/en-us/library/9a4kyhcx%28VS.80%29.aspx">ObjectDataSource</a> control. However, that strategy can lead down a less than desirable path for the following reasons:<br /><br /><ol><li>loss of control of how the data access is managed</li><li>ease of maintainability diminishes if any widespread changes were to emerge in the future within areas of the application relying on paging</li><li>disrupts and conflicts with how the MCP/MVC methodology is currently applied on the project</li><li>increased difficulties in writing and running reliable automated unit tests</li></ol><br />All things considered, despite some trade offs and a little bit of work, leveraging NHibernate's ability to do paging can be an area that could contribute significantly in optimizing and improving the performance of a data-intensive web application.Unknownnoreply@blogger.com5tag:blogger.com,1999:blog-8416807644832025349.post-6096153973520833442009-01-15T23:57:00.000-08:002009-01-19T09:13:53.031-08:00Test Driven BloggingYears ago when I first started learning and practicing <a href="http://en.wikipedia.org/wiki/Test-driven_development" target="_blank">Test Driven Development (TDD)</a>, I became extremely overzealous blindly believing all code needed to be unit tested (one of the more extreme cases were <a href="http://lexicalclosures.blogspot.com/search/label/TSQLUnit" target="_blank">my over-the-top uses of TSQLUnit</a> which included testing simple, low risk <a 
href="http://en.wikipedia.org/wiki/Drop_%28SQL%29"><i>DDL</i></a> changes that, in retrospect, was probably a bit too much with so little to gain.) This attitude was probably a common rookie mistake of becoming enamored with a new methodology (or language or framework or ....) by assuming this is how all software should be written. It is as if I had discovered the elusive magic bullet that a lot of programmers spend most of their careers searching for. Unit testing, along with its subset TDD, has steadily grown in popularity spanning the diverse programming communities spectrum. How could I possibly go wrong in my new found beliefs?<br /><br />However, this past year, I have seriously reconsidered my views on unit testing. Instead, I have <i>significantly </i>curtailed the use of unit testing by being more selective as to when it should be applied in code. I have realized that while testing is an important tool, it is far from the "be all and end all" that others have proclaimed it to be. 
Instead, I readjusted my thinking to focus on what is genuinely the most important goal in programming, which is to <b>actually deliver good working software in an iterative manner</b>.<br /><br />On <a href="http://stackoverflow.com/" target="_blank">Stack Overflow (SO)</a>, <a href="http://en.wikipedia.org/wiki/Kent_Beck" target="_blank">Kent Beck</a>, an early pioneer of TDD and a creator of <a href="http://www.junit.org/" target="_blank">JUnit</a> (the precursor of all modern xUnit-style frameworks), provided a very interesting and perhaps unexpected response to the question: <a href="http://stackoverflow.com/questions/153234/how-deep-are-your-unit-tests#153565" target="_blank">How deep are your unit tests?</a><br /><br /><blockquote style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;" class="gmail_quote">I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence.<br /></blockquote><br />The reality of what Beck wrote cannot be ignored. (At least, I <i>think</i> it's him. The tone and content of <a href="http://stackoverflow.com/users/13842/kent-beck" target="_blank">his other responses on SO</a> seem to indicate that it might just be.) Principally, working code is more important than the tests themselves. 
Tests can be just one of numerous methods to achieve the goal of delivering good quality software on a frequent basis, but tests must certainly <b>not </b>overshadow this intent.<br /><br />Along with Beck, it was unquestionably reassuring to read a recent post by Ayende (a notable .NET blogger/developer) that also <a href="http://ayende.com/Blog/archive/2008/12/21/the-tests-has-no-value-by-themselves-my-most-successful.aspx" target="_blank">tackles this very topic of software delivery and testing head on</a>:<br /> <blockquote style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;" class="gmail_quote"><p>I want to make it explicit, and understood. What I am riling against isn't testing. I think that they are very valuable, but I think that some people are focusing on that too much. For myself, I have a single metric for creating successful software:</p><blockquote> <p><i>Ship it, </i><i>often</i>.</p></blockquote></blockquote>Keep in mind these statements are from the creator of <a href="http://ayende.com/projects/rhino-mocks.aspx" target="_blank">Rhino Mocks</a>, one of the more popular mocking frameworks for .NET. Subsequently, it carries a lot of weight as it was written by someone who truly and thoroughly understands the virtues of unit testing and TDD. Ayende is someone I most certainly admire, being as close as anyone to the ideal of a <a href="http://forums.construx.com/blogs/stevemcc/archive/2008/03/31/chief-programmer-team-update.aspx">10x programmer</a>, not just in the realm of .NET but in programming in general (some of that admiration stems from being in complete awe of <a href="http://ayende.com/Blog/" target="_blank">his unearthly prolific blogging</a>). Having him confirm something that I have come to realize on my own this past year definitely helps to validate my current views. 
Quite simply, I have substantially toned down my TDD rhetoric and restrained my testing impulses in favor of renewing my true objectives in programming.<br /><br />My learning of TDD coincided with my learning of .NET and C#. At that point in time, I compulsively consumed the writings of a somewhat noted blogger in the .NET community who relentlessly championed unit testing and TDD. Practically treating this person's words as pure gospel, I considered this individual to be quite representative of the <a href="http://altdotnet.org/" target="_blank">ALT.NET community</a>, serving as one of the leading voices for all that it embodies. This is a community deeply immersed in the ways of unit testing and TDD.<br /><br />However, with my newly reformed outlook on software development, I have become much more wary of that blogger's "best practices" crusades. The blog continues to be obsessively fanatical about the non-negotiable importance of unit testing and TDD, almost to the exclusion of any competing methodologies or tools. Their dogmatic writings assuredly fall into the "focusing on that [testing] too much" camp described earlier in Ayende's post. Originally, this programmer's "test driven blog" held one of the few select spots on my blog's "Blog List" links. But, since it no longer carries the same relevance to me as it used to, I decided to remove it from the list.<br /><br />Nevertheless, I still read that blog from time to time because it does have great information and observations regarding software development and best practices in the .NET ecosystem. In addition, I still consider myself a TDD practitioner despite reducing the scope and influence of unit testing in my programming style. 
I just now better grok what my priorities are.Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-8416807644832025349.post-40181697460831403412008-12-10T00:05:00.000-08:002022-10-18T11:58:08.356-07:00Disentangling Nested Transactions in TSQLThe following TSQL error (#266) surfaced while using <a href="http://tsqlunit.sourceforge.net/index.html" target="_blank">TSQLUnit</a> to test a recently altered stored procedure for a legacy database:<br /><blockquote><span style="color: rgb(0, 0, 0);">"Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 0."</span><br /></blockquote>After struggling for several hours to pinpoint the problem, the debugging and troubleshooting process gave me a better understanding of how nested <span class="nfakPe">transactions</span> are managed in SQL Server. First and foremost, only one transaction can truly exist per connection, a fact previously unknown to me. This ultimately plays a pivotal role in the above error as revealed in <a href="http://manuals.sybase.com/onlinebooks/group-as/asg1250e/svrtsg/@Generic__BookTextView/14130;pt=14302" target="_blank">the Sybase Product Manual entry on Error 266</a>:<br /><blockquote>When a stored procedure is run, Adaptive Server maintains a count of open transactions, adding 1 to the count when a transaction begins, and subtracting 1 when a transaction commits. When you execute a stored procedure, Adaptive Server expects the transaction count to be the same before and after the stored procedure execution. 
<span style="font-weight: bold;">Error 266 occurs when the transaction count is different after execution of a stored procedure than it was when the stored procedure began</span>...<br /></blockquote>Furthermore:<br /><blockquote>Error 266 occurs when you are using nested procedures, and procedures at each level of nesting include <b>begin</b>, <b>commit</b>, and <b>rollback transaction</b> statements. If a procedure at a lower nest level opens a transaction and one of the called procedures issues a <b>rollback transaction</b>, Error 266 occurs when you exit the nested procedure.<br /></blockquote>The sproc under test is the "lower nest level" procedure in relation to the TSQLUnit sproc. What I discovered was that the TSQL for most of the existing stored procedures in the legacy database (such as the one being tested) contained this statement:<br /><pre>IF @@TRANCOUNT <> 0 ROLLBACK TRANSACTION<br /></pre>which was commonly found in two distinct locations in the code:<br /><ol><li>at the very beginning of the sproc before anything interesting occurs</li><li>at the very end when it's too late for it to be effective<br /></li></ol>As an example:<br /><pre>create proc someProc<br />as<br />begin<br /> IF @@TRANCOUNT <> 0 ROLLBACK TRANSACTION<br /> /* main body of the sproc */<br /> IF @@TRANCOUNT <> 0 ROLLBACK TRANSACTION<br />end<br /></pre>This slightly odd coding convention was interfering with TSQLUnit's native stored procs' ability to perform rollbacks of all changes once an individual unit test completes execution. The aforementioned rollback statement sabotaged the outer transaction of the TSQLUnit sproc (or any other calling sproc, for that matter) by resetting the @@trancount value, causing the error to be raised.<br /><br />Truthfully, unless I'm missing something, that rollback code does not really provide any benefit for the original sprocs themselves. 
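The trancount bookkeeping described in the manual entry can be simulated outside the database. The following is a toy Python model of my own (an illustration only, not SQL Server code): BEGIN TRAN increments the count, COMMIT decrements it, but ROLLBACK always resets it to zero, which is exactly why a rollback inside a nested sproc trips error 266 in the caller.

```python
# Toy model of SQL Server's transaction-count bookkeeping (an illustrative
# sketch for this post, not real SQL Server in every detail).

class Connection:
    def __init__(self):
        self.trancount = 0        # models @@TRANCOUNT

    def begin_tran(self):         # BEGIN TRANSACTION
        self.trancount += 1

    def commit(self):             # COMMIT TRANSACTION
        self.trancount -= 1

    def rollback(self):           # ROLLBACK TRANSACTION
        self.trancount = 0        # rolls back the *entire* nested transaction

def exec_proc(conn, proc):
    """Raise error 266 if the proc leaves the transaction count changed."""
    before = conn.trancount
    proc(conn)
    if conn.trancount != before:
        raise RuntimeError(
            "Error 266: Previous count = %d, current count = %d."
            % (before, conn.trancount))

def legacy_sproc(conn):
    # the legacy convention: blow away any open transaction on entry
    if conn.trancount != 0:
        conn.rollback()

conn = Connection()
conn.begin_tran()                 # outer transaction, e.g. TSQLUnit's
try:
    exec_proc(conn, legacy_sproc)
except RuntimeError as err:
    print(err)                    # the nested rollback reset the count to 0
```

The counts the sketch reports (previous = 1, current = 0) match the real error message quoted above.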
I do not fully understand the rationale for placing these lines of code in most of the database's sprocs. Perhaps it is some overcautious (and overzealous) attempt to handle any unforeseen data failures by forcing cleanups at every step. However, nothing indicates this to be true or probable. Point #2 listed above is especially puzzling since in most cases an explicit COMMIT has already taken place right before execution reaches the offending bit of code. Once a commit occurs, why bother attempting a rollback?<br /><br />The bug fix was simply to remove all occurrences of the rollback transaction code since it caused more harm than good. Maybe the reason for its existence was to intentionally prevent unwanted calls from other sprocs, thereby keeping them isolated and independent. As with most legacy code written by others long gone, I (or any other maintenance developer who comes after me) may never know.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-8416807644832025349.post-79209148222111989082008-11-30T18:00:00.000-08:002010-04-30T03:17:03.522-07:00Speak to me...Interpreting C#I came across <a href="http://www.pluralsight.com/community/blogs/craig/archive/2008/11/19/a-c-repl-in-clojure.aspx" target="_blank">A C# REPL (in Clojure)</a>, which discusses a way to interact with and run C# code via an interactive command line using <a href="http://en.wikipedia.org/wiki/Clojure" target="_blank">Clojure</a> and <a href="http://en.wikipedia.org/wiki/IKVM" target="_blank">IKVM.NET</a>. Now, Clojure I have heard of (a Lisp implementation that runs on the JVM), but this was my first time hearing about IKVM.NET (which I now know to be an implementation of the JVM for .NET). The post describes how combining these two technologies gives you the potential of working with a static language like C# in a way that is quite common in the world of dynamic languages such as Python, Ruby, Boo, etc.<br />
<br />
Having an interactive code interpreter is a huge productivity boost. It allows you to easily run and test your code as you write and modify it without paying the dreaded <a href="http://www.codinghorror.com/blog/archives/000860.html">compilation tax</a>, which can disrupt your development flow. This dramatically tightens the feedback loop on how well your code works, from verifying it behaves as intended and meets spec requirements to identifying runtime bugs much more quickly than with the development process common to traditionally compiled static languages. (As an aside, these are generally the same reasons given for creating and maintaining automated unit tests. Same goal, different methods.)<br />
<br />
I am not sure how well this C#/Clojure/IKVM.NET approach works or how well it realistically performs (typically, interpreted languages are slower). What is certain is that this unusual implementation requires the use of the very foreign-looking Lisp parentheses. I will openly admit, as someone who does not program in Lisp, that it strikes me as kind of strange to use and see parentheses with C#, but aside from this peculiar syntactical idiosyncrasy, the general concept of a <a href="http://en.wikipedia.org/wiki/REPL">REPL</a> for C# overshadows even this oddity. This quote sums up its overall appeal in the world of C#:<br />
<br />
<blockquote style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;" class="gmail_quote">A REPL is a Read-Eval-Print Loop, which is a fancy way of saying "an interactive programming command line". It's like the <b>immediate window in the Visual Studio debugger </b>on steroids, and its absence is one of the increasing number of things that makes C# painful to use as I gain proficiency in more advanced languages.<br />
</blockquote><br />
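To make the "Read-Eval-Print Loop" expansion concrete, here is a minimal sketch of the idea in Python (my own illustration, not the Clojure/IKVM.NET implementation the quoted post describes). A driver feeds it a list of lines standing in for interactive keyboard input:

```python
# A read-eval-print loop in miniature: read a line, evaluate it, print the
# result, repeat. Real REPLs add statements, multi-line input, history, etc.
# Note: eval() on untrusted input is unsafe; this assumes a trusted user.

def repl(lines, env=None):
    env = env if env is not None else {}
    results = []
    for line in lines:            # Read
        value = eval(line, env)   # Eval (toy version: expressions only)
        print(value)              # Print
        results.append(value)     # ...and Loop
    return results

# simulate a short interactive session
session = ["2 + 3", "'ab' * 2", "max([4, 9, 1])"]
repl(session)                     # prints 5, abab, 9
```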
This is precisely how I felt when <a href="http://lexicalclosures.blogspot.com/2008/07/snake-bitten-by-python-rip-nant.html" target="_blank">I initially started to learn and use Python</a>. Suddenly, coding in C# with Visual Studio seemed relatively more restrictive. Similarly, whenever I have had to touch any VBA code (yes, that does happen from time to time) I customarily inhabit the VB Editor's Immediate Window (IW), pushing its limits by attempting to use it in a manner similar to how I code in Python.<br />
<br />
For example, in the VB language, not only is it not required to declare the data type of a variable, it is even unnecessary to explicitly declare the variables themselves (usually this is done using the 'Dim' keyword, but it can be avoided by simply not including the 'Option Explicit' statement). Subsequently, the first time a value is assigned to a variable, the variable is automatically and implicitly created, just as it is in Python. As a result, you can somewhat attain that same level of interaction with code in VB (via the IW) as you would in Python (via its standard interpreter), potentially gaining the productivity benefits of writing less code in contrast to statically typed languages.<br />
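The implicit creation on first assignment can be shown in a couple of lines of Python (a trivial illustration of the point above):

```python
# In Python, as in VB without 'Option Explicit', a variable springs into
# existence on first assignment: no declaration, no type annotation.
total = 0             # 'total' is created right here
total = "done"        # and may later be rebound to a value of another type
print(total)          # prints done

# The flip side: reading a name that was never assigned raises NameError.
try:
    print(never_assigned)
except NameError:
    print("'never_assigned' does not exist yet")
```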
<br />
My increased reliance on the <a href="http://msdn.microsoft.com/en-us/library/f177hahy%28VS.80%29.aspx">Immediate Window</a> also extends to Visual Studio when coding in C#, but it requires more work and syntax overhead than the IW in the old VB Editor. Overall, it is not quite the same experience as in Python. Regardless, as <a href="http://lexicalclosures.blogspot.com/2008/08/applying-good-software-development.html" target="_blank">I have previously written</a>, frequent use of the VB Editor's IW led me to lean heavily on the one in Visual Studio whenever coding in C#. Prior to that, I had somewhat forgotten it even existed. In fact, <a href="http://jopinblog.wordpress.com/2007/06/18/missing-immediate-window-in-visual-studio-2005/" target="_blank">in VS 2005, the IW is sometimes missing</a> and difficult to view when not in debug mode (<a href="http://www.vitalygorn.com/blog/post/2008/01/Missing-Immediate-Window-in-VS2008.aspx">this is allegedly also true for VS 2008</a>). This is discouraging, as it probably contributes to most .NET developers not favoring its use in more situations.<br />
<br />
While I have seen <a href="http://stackoverflow.com/questions/47537/c-console" target="_blank">other attempts at providing an interactive console for C#</a> the following are ones I have noted to possibly try out in the very near future:<br />
<ul><li> <a href="http://tirania.org/blog/archive/2008/Sep-08.html" target="_blank">Interactive C# Shell</a></li>
<li><a href="http://www.codeproject.com/KB/cs/csi.aspx" target="_blank">CSI: A Simple C# Interpreter</a></li>
</ul>I am extremely curious if (and hopeful that) Microsoft will provide an improved implementation for VS's IW when the <a href="http://www.25hoursaday.com/weblog/2008/11/08/CIsTheNextPythonDuckTypingAndC40.aspx" target="_blank">more dynamic C# 4.0</a> becomes mainstream. (How long before we have an <i>official</i> C#Script? It worked for VB and VBScript.)Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-13792486950227189422008-11-14T23:37:00.001-08:002010-08-10T23:11:08.230-07:00Unit testing databases with TSQLUnitIn preparation for doing a lot of database work (think schema/DDL changes) for the next several months, I thought it would be important to define methodologies and tools to manage the development process. Having been heavily applying a <a href="http://en.wikipedia.org/wiki/Test-driven_development">test-driven development (TDD)</a> approach in a recent project for an ASP.NET C# web application, it seemed logical and now quite natural to continue pursuing a similar path and mindset, but one now targeted more towards database development.<br />
<br />
<span style="font-weight: bold;">TSQLUnit and TDD</span><br />
<br />
To facilitate a smooth flow with TDD, it was key to identify a suitable testing framework to support my goal. After some research and comparisons, I eventually settled on <a href="http://tsqlunit.sourceforge.net/"><span class="nfakPe">TSQLUnit</span></a>, an <a href="http://en.wikipedia.org/wiki/XUnit">xUnit</a> framework for MS SQL Server. It is interesting to note that for every programming language that exists out there, it is expected that <a href="http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks">someone will eventually create an xUnit style testing framework for it</a>. Why should SQL Server be any different?<br />
<br />
Database testing could be performed from the application side, but the nature of the development was primarily schema/DDL changes (e.g. mostly dropping tables, columns, etc.) that involved very little or no business logic. Otherwise, app code would be a better and more appropriate candidate to handle testing the db. This particular scenario lent itself to finding and using something that was as close to the database as possible, ideally where the unit tests could be written and supported by the TSQL language itself. The intent was to follow one of the core tenets of any xUnit framework, which is to do testing in the same language as the code being changed. TSQLUnit seemed the best fit.<br />
<br />
Regardless, it is quite probable that, in the future, I might disregard all of the above and still switch to testing all aspects of the database using C#. This is not because of TSQLUnit but because the TSQL language itself can be so cumbersome to work with, especially if you are accustomed to the power and flexibility of a non-declarative programming language such as C#. For the moment, it satisfies my current needs for testing, so it will suffice.<br />
<br />
<span style="font-weight: bold;">A Few Key Features of TSQLUnit</span><br />
<br />
One feature of TSQLUnit that is critical for any type of database testing is the ability to roll back changes after each test run completes, ensuring that the database is in a clean state before the next test starts. This simply means that the data will not remain altered when the next test runs, potentially tainting its results. Not only does this apply to all data/DML changes (inserts, updates, deletes) but to DDL changes (creates, alters, drops) as well. For example, if Test A creates a table or alters a column of an existing table, then those schema changes will most certainly be rolled back and undone before Test B starts to execute. Therefore, the tests themselves are isolated from each other, meaning that all of the unit tests can run independently.<br />
<br />
In addition, TSQLUnit supports the xUnit features of <a href="http://en.wikipedia.org/wiki/XUnit#Test_Execution">Setup/Teardown</a> for a test suite. For example, if you have five tests that are logically related and are dependent on the same preconditions then you can prepare all of your data in a single, shared 'Setup' fixture and TSQLUnit will automatically run this before each associated test thereby enforcing no co-dependencies between the tests. Note that one obvious drawback with repeating the same setup multiple times might be performance if the tests rely on loading large volumes of data for the purpose of testing. As for 'Teardown', it provides essentially the same functionality as 'Setup' with the one difference of executing at the <span style="font-style: italic;">end</span> of each test and not at the beginning.<br />
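The same Setup/Teardown pattern appears in every xUnit framework. As an analogy only (Python's unittest rather than TSQLUnit's TSQL fixtures), setUp runs before each test and tearDown after each one, so the two tests below never observe each other's changes:

```python
import unittest

class OrderRowTests(unittest.TestCase):
    def setUp(self):
        # stand-in for loading fresh test rows before every test
        self.rows = [{"id": 1, "qty": 5}]

    def tearDown(self):
        # stand-in for TSQLUnit's automatic rollback after every test
        self.rows = None

    def test_insert(self):
        self.rows.append({"id": 2, "qty": 3})
        self.assertEqual(len(self.rows), 2)

    def test_delete(self):
        # sees the pristine one-row fixture, untouched by test_insert
        self.rows.clear()
        self.assertEqual(len(self.rows), 0)

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(OrderRowTests).run(result)
print(result.wasSuccessful())     # prints True: each test got a fresh fixture
```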
<br />
TSQLUnit is fairly decent, but I initially struggled not with the framework itself but with the usual problems of database testing such as setup of data, rollback of changes, etc. It is a lot of work because, unlike in application code, it is hard to stub out dependencies on database objects such as tables, stored procedures, etc. (Especially when dealing with a 225+ column table in the good ol' legacy db I was so fortunate to be tasked with modifying.)<br />
<br />
<span style="font-weight: bold;">Running TSQLUnit using NAnt</span><br />
<br />
Once a few initial unit tests were created using TSQLUnit, the process of running them was incorporated into the project's NAnt database build script. Just like the database build itself, the unit tests make use of the <a href="http://nantcontrib.sourceforge.net/release/0.85-rc1/help/tasks/sql.html">NAntContrib <sql> task</a> for their creation and execution. These are the high level targets defined in the build script:<br />
<pre><!-- after building database including applying change scripts -->
<call target="InstallTSQLUnit"/>
<call target="LoadUnitTests"/>
<call target="RunUnitTests"/>
</pre><br />
'InstallTSQLUnit' is a single sql script that runs against your target db creating all of the necessary tables, stored procedures, etc. that the TSQLUnit framework requires to function. This is simply the basis and source code for the framework itself that is integrated with the database.<br />
<br />
'LoadUnitTests' runs the sql scripts that contain the tests themselves, located in a 'Tests' folder in the project's directory. To elaborate, it creates all of the unit test sprocs along with, if necessary, injecting any test data into the database that is not handled by the Setup fixtures.<br />
<br />
Finally, 'RunUnitTests' calls the <span class="nfakPe">TSQLUnit</span> command to run all of the unit tests inside the database:<br />
<pre><target name="RunUnitTests" description="Run all unit tests "
if="${installUnitTesting}">
<sql connstring="${connectionString}" transaction="true" delimiter=";" delimstyle="Line">
exec tsu_runTests;
</sql>
</target>
</pre>The NAnt db build is configurable so that you can specify whether or not to include the unit tests. This might be necessary if you simply want to build the database and nothing more, perhaps to reduce the build time when the database will be used for a purpose other than testing.<br />
<br />
In the command-line window, as NAnt logs the progress of the database build, at the end it will bubble up the output results of <span class="nfakPe">TSQLUnit</span> indicating whether the tests passed or failed:<br />
<pre>...
ExecuteSql:
[echo] loading .\Database\Tests\ut_DeleteTables_VerifyTablesDropped.sql
RunUnitTests:
[sql] ====================================================================
============
[sql] --------------------------------------------------------------------
------------
[sql] SUCCESS!
[sql]
[sql] 14 tests, of which 0 failed and 0 had an error.
[sql] Summary:
[sql] Run tests ends:Feb 20 2008 11:44AM
[sql] --------------------------------------------------------------------
------------
[sql] Testsuite: (14 tests ) execution time: 76 ms.
[sql] ====================================================================
============
[sql] Run tests starts:Feb 20 2008 11:44AM
[sql] ====================================================================
============
BUILD SUCCEEDED
Total time: 6.1 seconds.
</pre>Currently, the overall database build does not fail if the unit tests themselves fail, since NAnt cannot catch and interpret <span class="nfakPe">TSQLUnit</span> results. This codeproject article, <a href="http://www.codeproject.com/KB/database/NAntSQLSrvrScrptValidtr.aspx" target="_blank">MS SQL Server script validation with NAnt Task</a>, seems to provide that level of integration between NAnt and SQL Server unit testing. Perhaps this functionality might be implemented later but, for now, it is good enough.<br />
<br />
<b>Other related posts on TSQLUnit</b>:<br />
<br />
<a href="http://lexicalclosures.blogspot.com/2009/02/its-true-and-not-false-how-to-assert.html">It's True (And Not False): Assert Equality in TSQLUnit</a><br />
<br />
<a href="http://lexicalclosures.blogspot.com/2010/03/database-schema-changes-unit-tested.html">Database schema changes unit tested using TSQLUnit</a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-58712258858910522882008-11-09T22:28:00.000-08:002008-11-09T23:36:53.566-08:00Relax and "unwind" with recursionI was recently looking to change some values, which I was not too fond of, in test data scripted as TSQL insert statements. This data is loaded on development and test builds of a database for a system that I work on.<br /><br />The data changes center around some arbitrary ID numbers. These values are not system generated but can be considered natural primary keys since they are defined outside the system and are used and referenced by the end-users, carrying meaning in the business domain. However, for development and testing purposes, the values can be anything.<br /><br />At first, the numbers reflected actual values that preexisted outside the testing and dev environments, but I found them too difficult to remember when testing the application manually or when defining automated unit test case scenarios. Therefore, I decided to replace the existing ID numbers with a sequence of numbers that I could recall more easily:<br /><blockquote>1234567890, 2345678901, 3456789012, ..., 0123456789<br /></blockquote>I figured this list of numbers would be easier to remember on the fly than randomly generated ones. You can see an obvious pattern here, which lends these values well to being committed to (human) memory. The initial number in the list is a universally simple value of 1234567890. Each subsequent number is the same except that the first digit is moved from the first position to the last position as compared to the preceding value in the list. 
For example, the number '1' in 1234567890 gets relocated to the end to become 2345678901.<br /><br />I decided to write a simple Python script to update all of the data sql files with these new numbers. Now, I could quite obviously have created the list manually in a very short time in my script, such as:<br /><pre>new_id_numbers = ['1234567890', '2345678901', '3456789012', ..., '0123456789']</pre>However, I always like to find opportunities to challenge my programming abilities even in inconsequential situations such as this one. The challenge I created for myself was to see how I could <span style="font-style: italic;">programmatically </span>generate this exact same list. Evidently, a pattern exists in these numbers, implying code can be written to produce this particular set of values without the need to type them out manually.<br /><br />My first impulse was that I could write some kind of iterative loop to generate the list:<br /><pre># iteration version<br />def get_swapped_numbers(numbers):<br />    first = numbers[0]<br />    for item in numbers:<br />        if len(numbers) == len(first) or len(first) == 0:<br />            return numbers<br />        last = numbers[-1]<br />        new = last[1:] + last[:1]<br />        numbers.append(new)<br />    return numbers<br /><br />>>> initial_value_list = ['22566']<br />>>> print get_swapped_numbers(initial_value_list)<br />['22566', '25662', '56622', '66225', '62256']<br /></pre>It works. Nothing unusual here. However, I recognized that this particular numerical pattern is indeed recursive, in that each value in the list is dependent on the previous value, and that <a href="http://en.wikipedia.org/wiki/Recursion_%28computer_science%29">recursion</a> could be used to solve this as well. Like probably most programmers, I had never used recursion in actual code, but it has been one of those fundamental concepts in computer science that I felt I needed to better understand and explore. 
As a result, this simple pattern provided an excellent opportunity to flex my recursive muscles.<br /><br />After mulling over for some time how to implement a recursive function for this specific problem, here is what I eventually churned out:<br /><pre># recursion version<br />def get_swapped_numbers(numbers):<br />    first = numbers[0]<br />    if len(numbers) == len(first) or len(first) == 0:<br />        return numbers<br />    last = numbers[-1]<br />    new = last[1:] + last[:1]<br />    numbers.append(new)<br />    return get_swapped_numbers(numbers) # recursion!!<br /><br />>>> initial_value_list = ['22566']<br />>>> print get_swapped_numbers(initial_value_list)<br />['22566', '25662', '56622', '66225', '62256']<br /></pre>To build a recursive function, the most important piece is that somewhere in the body of the function, the function must call itself by name:<br /><pre>    return get_swapped_numbers(numbers) # recursion!!<br /></pre>Without it, well...you just have a plain ol' vanilla function.<br /><br />The other very important piece of a recursive function is some kind of conditional statement that acts like a guard clause, commonly referred to as the 'base case' in recursion lingo. The base case causes the recursive calls to stop and begin to "unwind" as they spiral back up to the first initial call. If a base case is not included, you run the risk of unleashing runaway recursion that only stops when the interpreter's recursion limit is hit.<br /><br />The base case is really no different from the break/return point in the iteration example:<br /><pre>    if len(numbers) == len(first) or len(first) == 0:<br />        return numbers<br /></pre>In fact, any recursive function can be written and expressed as an iterative loop. This is probably why most programmers do not use recursion, since you can achieve the same results using more familiar, day-to-day techniques. 
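The equivalence claimed above (any recursive function can be expressed as a loop) is even easier to see with a smaller example. Factorial here is my own hypothetical illustration, not from the number-swapping problem; both forms share the same base case/termination condition:

```python
# factorial, written both ways to show the loop/recursion equivalence

def fact_recursive(n):
    if n <= 1:                # base case: stops the "unwinding"
        return 1
    return n * fact_recursive(n - 1)

def fact_iterative(n):
    result = 1
    while n > 1:              # the same condition, now a loop guard
        result *= n
        n -= 1
    return result

print(fact_recursive(5), fact_iterative(5))   # prints 120 120
```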
In addition, traditionally in most languages, recursion tends to perform slower than its loop counterpart.<br /><br /><a href="http://stackoverflow.com/questions/72209/recursion-or-loop">So, why even use recursion when iteration can suffice?</a> The real benefit is that sometimes a recursive function can end up being more readable and clearer in intent than an iterative function (although in my example it is a wash in this respect). Certain types of recursion (e.g. <a href="http://en.wikipedia.org/wiki/Tail_recursion">tail recursion</a>) are also optimized by some compilers (though notably not by CPython), so recursion can truly provide better performance with the added bonus of improved readability.<br /><br />Truthfully, writing the loop was a lot easier than the recursion version. Perhaps the reason is that writing loops is so ingrained and second nature to me, having created so many over the years, coupled with the fact that this was my first real attempt to implement a true recursive function. Admittedly, it was a bit trippy mentally working through each function call to ensure I avoided the dreaded infinite recursion. However, the same could be said of when I first learned to write loops many years ago. With more time and practice, I can probably create a recursive function without exerting any more thought than I would with a loop. All in all, even if I never again use recursion in my code, it is still an important technique for programmers to understand and recognize, especially if encountered in code you did not write.Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-8416807644832025349.post-54437337720437916652008-10-17T23:06:00.000-07:002008-10-17T23:49:30.991-07:00Is JavaScript the "Next Big Language"?I recently read an old post by <a href="http://steve-yegge.blogspot.com/" target="_blank">Steve Yegge</a> from early 2007 entitled <a href="http://steve-yegge.blogspot.com/2007/02/next-big-language.html" target="_blank">The Next Big Language</a>. 
In it, he describes the fundamental characteristics that a new programming language must have if it is to become popular. Throughout his post, he hints at what he thinks the next big language ("NBL") might be but intentionally does not mention it by name for various reasons.<br /><br />However, it seems quite evident that Yegge is probably referring to <a href="http://en.wikipedia.org/wiki/JavaScript" target="_blank">JavaScript</a> (and/or the <a href="http://en.wikipedia.org/wiki/ECMAScript" target="_blank">ECMAScript standard</a>). A lot of his later posts seem to support that conjecture. One post in particular, entitled <a href="http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html" target="_blank">Dynamic Languages Strike Back</a>, really does emphasize the strong possibility of JavaScript being the NBL:<br /><blockquote>"...So JavaScript. JavaScript has been really interesting to me lately, because JavaScript actually does care about performance. They're the first of the modern dynamic languages where performance has become an issue not just for the industry at large, but also increasingly for academia.<br /></blockquote><blockquote>Why JavaScript? Well, it was Ajax. See, what happened was... Lemme tell ya how it was supposed to be. JavaScript was going away. It doesn't matter whether you were Sun or Microsoft or anybody, right? JavaScript was going away, and it was gonna get replaced with... heh. Whatever your favorite language was.<br /></blockquote><blockquote>I mean, it wasn't actually the same for everybody. It might have been C#, it might have been Java, it might have been some new language, but it was going to be a <em>modern</em> language. A fast language. It was gonna be a scalable language, in the sense of large-scale engineering. Building desktop apps. 
That's the way it was gonna be.<br /></blockquote><blockquote>The way it's <em>really</em> gonna be, is JavaScript is gonna become one of the smokin'-est fast languages out there. And I mean <em>smokin'</em> fast..."</blockquote>Yegge has also mentioned using <a href="http://en.wikipedia.org/wiki/Server-side_JavaScript" target="_blank">JavaScript server-side</a> for some project he was working on at Google. Yes, <i>server-side</i> and not client-side. If what he is saying is true, and the JavaScript language continues to evolve with <a href="http://en.wikipedia.org/wiki/JavaScript#Features" target="_blank">more and more features transforming it into a powerful multi-paradigm language</a>, and programmers start using it not just on the client side but increasingly on the server side as its performance becomes better and faster, then it could really position itself as a first-class language on par with C# and Java. Quite a leap from JavaScript's somewhat humble beginnings.<br /><br />Obviously, Yegge is just one voice (albeit a very popular one) on the topic, but <a href="http://www.codinghorror.com/blog/archives/001163.html" target="_blank">the importance of JavaScript as a fundamental language for web apps</a> is not to be taken lightly. This reminded me of a recent conversation about JavaScript with <a href="http://www.sneal.net/blog/" target="_blank">another software developer</a> with whom I used to work. He stated that he felt he was currently doing more coding in the client-side/UI layer with JavaScript than in the middle layer with C#, even though the middle tier is where he is expected to be doing most of his development. 
He also observed that the web application he is working on was starting to resemble a classic <a href="http://en.wikipedia.org/wiki/Client-server" target="_blank">Client-Server</a> application despite the presence of a middle tier.<br /><br />The implication is that more people might be gravitating towards this type of architecture when developing robust web applications without realizing it. It is interesting that, with the advent and surging popularity of web applications, reports of the decline of desktop applications ("fat" or "thick" clients) were greatly exaggerated. They are coming back, just in an altered and less recognizable form.<br /><br />The statements and experiences of the developer I mentioned earlier (combined with Yegge's writings) are extremely telling and something to which we should pay attention. The increasing use of JavaScript, and more importantly how and where it is being used, is a trend that should be closely watched in the ever changing world of modern software development.Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-8416807644832025349.post-42590910906279243512008-09-24T06:08:00.000-07:002022-10-18T11:58:08.356-07:00Comprehending List Comprehensions<div>Since using <a href="http://www.python.org/">Python </a>to <a href="http://lexicalclosures.blogspot.com/2008/07/snake-bitten-by-python-rip-nant.html">write build scripts </a><a href="http://lexicalclosures.blogspot.com/2008/07/snake-bitten-by-python-rip-nant.html">(as well as for code generation) </a><a href="http://lexicalclosures.blogspot.com/2008/07/snake-bitten-by-python-rip-nant.html">to support my development process</a>, I have come to increasingly learn and appreciate the power of the Python language. 
A recent coding situation demonstrated to me how understanding the alternatives available in a multi-paradigm language such as Python can highlight the limitations of another language such as C#, and can have a real influence on how you think about and write code.<br /><br /><span style="font-weight: bold;">Building working code first...</span><br /><br />In my current project, <a href="http://odetocode.com/Blogs/scott/archive/2008/02/03/11746.aspx">the database for the system is maintained under source control</a>. In the directory of my project's database files, a subdirectory exists named 'CurrentReleaseOnly' which contains database unit tests written using the <a href="http://tsqlunit.sourceforge.net/">TSQLUnit</a> framework. The purpose of this folder is to segregate out tests that are really just "one time only", with no intention of retaining them once the db schema changes are implemented in production <a href="#footnote-1">[1]</a>. In addition, the subdirectory contains a plain old text file named 'README.TXT' which serves to explain to other developers working on the project why the folder exists.<br /><br />Let's say in that folder, 'CurrentReleaseOnly', I have 4 files, three of which are unit test sprocs and one of which is that 'readme' file:<br /><ol><li>ut_uspVerifyDroppedColumns.sql</li><li>ut_uspVerifyDroppedTable.sql</li><li>ut_uspVerifyArchivedData.sql</li><li>README.TXT</li></ol>Since the project files are maintained under <a href="http://www.perforce.com/">Perforce </a>(P4) <a href="#footnote-2">[2]</a>, one of the project maintenance scripts needs to permanently delete all files within that folder from the source code repository, with the one exception of the aforementioned 'readme' file. In this example, that would imply deleting files # 1 through 3 but keeping # 4. Obviously, from one development cycle to another the number of files eligible for deletion would vary but only one would always be retained (i.e. 
# 4).<br /><br />The command line syntax in P4 to open a file for delete is the following:<br /><pre>p4 delete file1.txt file2.txt file3.txt<br /></pre>The goal is to output and execute the above command. The following is what was originally coded to achieve this action:<br /><pre><br /><div style="margin-left: 0px;">def open_for_delete_unit_tests_from_previous_release():<br />    """ Open for delete in Perforce unit tests from previous release """<br /><br />    # find files to open for delete<br />    exclude_file = 'README.TXT'<br />    delete_files_dir = os.path.join(unit_test_dir, 'CurrentReleaseOnly')<br /><br />    # build delete command text<br />    all_files = os.listdir(delete_files_dir)<br />    cmd = ''<b><br />    for f in all_files:<br />        cmd = cmd + f + ' '<br /><br />    cmd = 'p4 delete ' + cmd</b><br /><br />    # execute 'open for delete' in source control<br />    p4 = os.popen(cmd)<br />    p4.read()<br />    p4.close()<br /><br />    return True</div></pre></div>Quite simply, a list object is first populated with the names of all the files in the directory. Then a loop through each file name is performed, incrementally building the P4 command text. Eventually, the output for the command text should look like this:<br /><pre>p4 delete ut_uspVerifyDroppedColumns.sql <span style="font-weight: bold;">README.TXT</span><br />ut_uspVerifyDroppedTable.sql ut_uspVerifyArchivedData.sql<br /></pre>However, as is evident, it will also delete the 'readme' file which, if you recall, needs to remain to document the use of the directory. 
To make this happen, the following conditional statement was added to the loop:<br /><pre>...<br />for f in all_files:<b><br />    if f == exclude_file:<br />        continue</b><br />    cmd = cmd + f + ' '<br /><br />cmd = 'p4 delete ' + cmd<br />...<br /></pre>As a result, the P4 output changes to now exclude the 'readme' file:<br /><pre>p4 delete ut_uspVerifyDroppedColumns.sql<br />ut_uspVerifyDroppedTable.sql ut_uspVerifyArchivedData.sql<br /></pre>We have now achieved our desired output and it actually works. All is good except...<br /><br /><span style="font-weight: bold;">Implementing List Comprehensions</span><br /><br />Now, you are thinking: "So what? What is the big deal? This is rudimentary programming that any four-year-old can do." Yes, of course. However, I kept thinking that this was not very "<a href="http://faassen.n--tree.net/blog/view/weblog/2005/08/06/0">Pythonic</a>". Python is all about manipulating lists in an efficient and concise manner.<br /><br />I immediately went back and re-read some more about list comprehensions. 
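As a quick refresher, a list comprehension collapses a filter-and-accumulate loop into a single expression. Here is a minimal, self-contained sketch of that equivalence; the file names are hard-coded hypothetical values standing in for the actual directory listing used by the script:

```python
# Hypothetical file names for illustration only (the real script reads
# them from the 'CurrentReleaseOnly' directory with os.listdir).
all_files = ['ut_uspVerifyDroppedColumns.sql',
             'ut_uspVerifyDroppedTable.sql',
             'ut_uspVerifyArchivedData.sql',
             'README.TXT']
exclude_file = 'README.TXT'

# The loop-with-continue approach: accumulate names, skipping the exclusion.
keep = []
for f in all_files:
    if f == exclude_file:
        continue
    keep.append(f)

# The equivalent list comprehension: one expression, same result.
keep_lc = [f for f in all_files if f != exclude_file]
```

Both versions produce the same three-element list of unit test files; the comprehension simply expresses the filter inline.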
Armed with a better grasp of this style of programming, the code was altered to implement this alternative way of building the same P4 command:<br /><pre><div style="margin-left: 0px;">import os<br />...<br />def open_for_delete_unit_tests_from_previous_release():<br />    """ Open for delete in Perforce unit tests from previous release """<br /><br />    # find files to open for delete<br />    exclude_file = 'README.TXT'<br />    delete_files_dir = os.path.join(unit_test_dir, 'CurrentReleaseOnly')<br /><br />    # build delete command text<br />    all_files = os.listdir(delete_files_dir)<br /><b>    delete_files = [(delete_files_dir + os.sep + f) for f in all_files<br />                    if f != exclude_file]<br />    cmd = 'p4 delete ' + ' '.join(delete_files)<br /></b><br />    # open for delete in source control<br />    p4 = os.popen(cmd)<br />    p4.read()<br />    p4.close()<br /><br />    return True<br /></div></pre>By using <a href="http://docs.python.org/tut/node7.html#SECTION007140000000000000000" target="_blank">Python's implementation of list comprehensions,</a> a new list is built by filtering out the unneeded items based on some defined criteria. After the new filtered list is created, the final p4 command text is generated by calling the join method on a single-space (" ") string.<br /><br />List comprehensions support not just <a href="http://en.wikipedia.org/wiki/Filter_%28higher-order_function%29" target="_blank">filtering</a> but also applying the same function to each item in a list. This is a variation and more shorthand way of implementing the <a href="http://en.wikipedia.org/wiki/Map_%28higher-order_function%29" target="_blank">"map" function</a> commonly found in functional languages. 
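To make the connection concrete, here is a small sketch showing the built-in 'map' and 'filter' functions alongside their comprehension equivalents. The file names are hypothetical, and the snippet is written in modern Python 3 syntax, where map and filter return iterators that must be wrapped in list():

```python
files = ['ut_uspVerifyDroppedColumns.sql', 'README.TXT']  # hypothetical names

# filter: keep only items satisfying a predicate...
kept = list(filter(lambda f: f != 'README.TXT', files))
# ...and the comprehension equivalent of the same filter
kept_lc = [f for f in files if f != 'README.TXT']

# map: apply a function to every item...
upper = list(map(str.upper, files))
# ...and the comprehension equivalent of the same map
upper_lc = [f.upper() for f in files]
```

In each pair, the comprehension and the built-in produce identical lists; the comprehension just keeps the predicate or transformation inline rather than behind a function object.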
<a href="http://docs.python.org/tut/node7.html#SECTION007130000000000000000" target="_blank">Python does indeed explicitly support the classic functional programming functions of 'map', 'reduce', and 'filter'</a> but its list comprehensions are an even more concise way of implementing map and filter<a href="#footnote-3">[3]</a>.<br /><br />If you were not impressed with the previous filtering example, then here is another, more trivial example of applying a 'map' function using list comprehensions. This time the file extension is stripped from each file name contained within a given list <a href="#footnote-4">[4]</a>:<br /><pre>>>> files = ['ut_uspVerifyDroppedColumns.sql', 'ut_uspVerifyDroppedTable.sql',<br />'ut_uspVerifyArchivedData.sql', 'README.TXT']<br />>>> print <span style="font-weight: bold;">[f[:-4] for f in files]</span> # remove file extension using list comprehensions<br />['ut_uspVerifyDroppedColumns', 'ut_uspVerifyDroppedTable',<br />'ut_uspVerifyArchivedData', 'README']<br /></pre><span style="font-weight: bold;">Comparisons to SQL</span><br /><br />What really struck me about list comprehensions is how much they reminded me of the ubiquitous database language, SQL. Given my long experience with querying and data manipulation against SQL databases, I found Python's use and style of list comprehensions to be much more interesting and maybe even more powerful. Shortly after noting the similarities, I read that <a href="http://en.wikipedia.org/wiki/List_comprehension#History" target="_blank">list comprehensions were even considered for database querying</a>:<br /><blockquote>"Comprehensions were proposed as a query notation for databases and were implemented in the <i>Kleisli</i> database query language"<br /></blockquote>I plan to write a much longer post on my opinions regarding the future of SQL as a language but until then I will say the following. 
LINQ is a great attempt to bake actual data querying features into the C# language, but with one major flaw. It still adopted the SQL syntax in the process, a syntax which really does need its own makeover (or better yet a replacement).<br /><br />I'm sure the main reason for Microsoft's decision to closely model LINQ after SQL was to give .NET developers something they were already deeply familiar with and thereby more apt to use. However, if Microsoft had perhaps used something resembling list comprehensions instead of SQLish syntax, it might have made C# an even more powerful language by baking in a more intuitive and compact syntax <a href="#footnote-5">[5]</a><br />...<br /><p id="footnote-1">[1] If you are someone who is a <a href="http://en.wikipedia.org/wiki/Test-driven_development">TDD</a> practitioner (like myself) you might be shouting: "How can you be throwing away unit tests after writing them? That is insanity and completely violates the very essence of TDD!!!!". Yes, but just like any other methodology, the principles of TDD should not always be followed blindly and adhered to strictly. Sometimes, exceptions have to be made.<br /><br />In this particular instance, the reasons for dropping certain unit tests after a period of time were primarily due to performance. Since tests against a database tend to be slower than more traditional unit tests found in app code, I decided that after each production release any tests that are no longer valuable beyond the next release would be purged from the project's code base.<br /><br />For example, unit tests that are used to test such things as "one time only" migration of data from one database to another, or tests that simplistically check for the existence (or, conversely, the non-existence) of db objects like columns or tables, are good candidates for permanent removal from the suite of db tests. 
On the other hand, unit tests that assert and validate some complicated logic in stored procedures, for example, would be kept and not removed. Regardless, this is not the point of this blog post so I digress...<br /></p><p id="footnote-2"><br />[2] Admittedly, I would rather be using the significantly less intrusive Subversion.<br /></p><p id="footnote-3"><br />[3] For some reason, the equivalent of 'reduce' is not supported by Python's list comprehensions. Perhaps it is because <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=98196" target="_blank">the creator of Python was not a fan of map/reduce/filter</a> since the time of their inclusion into the Python language. He especially seems to have a dislike for 'reduce'.<br /></p><p id="footnote-4"><br />[4] For improved readability, I could have defined a separate function stating more explicitly what exactly it was doing without resorting to using comments (which tend to be a 'code smell'):<br /></p><pre>def remove_file_extension(f): return f[:-4]<br />print [remove_file_extension(f) for f in files]<br /></pre>Or I could have used a lambda function for equal effect:<br /><pre>remove_file_extension = lambda f: f[:-4]<br />print [remove_file_extension(f) for f in files]<br /></pre><p id="footnote-5">[5] Actually, as I recently discovered, <a href="http://www.25hoursaday.com/weblog/PermaLink.aspx?guid=3d5755bf-43cf-4d47-a7ec-b60f6b536702">C# does support map, reduce, and filter as of version 3.0</a> (respectively, "Enumerable.Select", "Enumerable.Aggregate", and "Enumerable.Where"). Not quite list comprehensions but definitely a huge lift for the language. In addition, my understanding is that F#, being a functional language, does support list comprehensions beyond the standard map/reduce/filter.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-79082773869796445862008-09-13T00:49:00.000-07:002008-09-14T22:54:46.128-07:00Who needs a good text editor? 
I write perfect codeThe first time I read <a href="http://www.pragprog.com/titles/tpp/the-pragmatic-programmer" target="_blank">Pragmatic Programmer</a> (a future classic) it strongly emphasized that programmers should choose one good text editor and learn it well. This got me thinking about how important a good text editor is if you write code for a living. I am of the school of thought that <a href="http://www.martinfowler.com/articles/newMethodology.html" target="_blank">code is design</a>. I want to increase the speed at which I write code so as to match my thoughts. Code is the concrete extension of my thoughts on how to implement software. This means that code should be easy to manipulate and thereby be malleable and fluid in nature. Therefore, it makes perfect sense to me why the book discusses the virtues of using a good editor.<br /><br />At the time of my first reading PP, my text editors were Visual Studio (the default IDE for .NET developers such as myself) along with the plain vanilla Notepad. Inspired by PP, I went on to use <a href="http://www.flos-freeware.ch/notepad2.html" target="_blank">Notepad2</a> and then eventually moved on to the more robust and extensible <a href="http://notepad-plus.sourceforge.net/uk/site.htm" target="_blank">Notepad++</a>. However, I recently re-read PP because it is one of those books you need to keep referring back to, to make certain you are headed down the right path as a programmer. (Also, you tend to miss out on tidbits of good info due to faulty memory.) This time around I noted that one of the text editors they recommended was <a href="http://www.gnu.org/software/emacs/" target="_blank">Emacs</a>.<br /><br /><b>What is Emacs (and VI)?</b><br /><br />After researching Emacs, I immediately got the impression that this is one of the text editors that serious, hardcore programmers and, dare I say, hackers use. 
If you want to become one of those (or at least aspire to), then you might as well use what those individuals use because they obviously must know something, right? <a href="http://en.wikipedia.org/wiki/GNU_Emacs#History" target="_blank">Emacs has been around since the 1970's</a>, making it one of the oldest text editors still under active development as of today (2008). So, once again, something must be good about it, right?<br /><br />I noticed that another editor was consistently being referenced in most things I read about Emacs. That other text editor was <a href="http://www.vim.org/">VI (or VIM)</a>. Not surprisingly, just like a lot of things in the world of programming, <a href="http://www.linux.com/feature/19661" target="_blank">a long running rivalry exists between Emacs and VI</a>. Now, unlike Emacs, I was, surprisingly, already familiar with VI since I had learned to use it in a technical class I took early in my career. Guess what? At the time, I hated it.<br /><br />My dislike for VI centered on the fact that I was weaned on more modern text editors such as Notepad and other Windows applications, so using VI felt foreign to me. I could not understand how anyone could even be efficient with a tool such as VI. Why couldn't I simply use the arrow keys, delete key, etc.? Why must I memorize and use some other combination of keys? In addition, the mode switching also confused me, that is, the fact that editing text was not quite the same as reading it.<br /><br />VI reminded me too much of a word processing program I had used in the late 80's on my home PC (a Tandy, if I recall). This made me view VI as a relic that should no longer be needed in the modern world. 
Also, at the time (and prior to that) I hated having to learn and memorize command names and was not fully enamored with text-only, console-like environments (Microsoft did a really good job of making me dependent on GUIs and my mouse).<br /><br />Of course, I no longer hold any of what I now consider to be quite ridiculous and silly attitudes and opinions regarding VI. I have completely repented and now understand how deeply wrong I was. Hey, what do you expect from a newbie programmer back then?<br /><br /><b>Choosing a New Text Editor</b><br /><br />Now, which one do I use? Emacs or VI? Truthfully, I really don't know. Just like most things, <a href="http://vspedia.com/1084-vi-vs-Emacs" target="_blank">each has its pros and cons.</a><br /><br />However, I made my choice and decided to learn to use Emacs. I was swayed by the fact that Emacs, as compared with VI, (1) has so many more features (although I might never use them all ;-) ) and (2) is extensible. One long running criticism of Emacs (particularly from the VI community) was that it is very slow to load up and run (due to its using its own dialect of Lisp, an interpreted language, and, as we all know, interpreted languages tend to be slower than compiled ones). Well, fortunately, with modern systems this is no longer the case whatsoever (if anything, more modern IDEs like Visual Studio are slow in comparison to Emacs).<br /><br />Probably the most difficult thing about using Emacs at first will be the same reason why I originally did not like VI: learning all of its essential commands. But now it's different because I <u><i>want</i></u> to learn it because I fully understand its rewards. It will indeed be tough at first but, from what I read, once you learn the commands (at least the basic ones) your productivity should start to increase. 
To me it will be no different than when I first started learning the fantastic Visual Studio add-in, <a href="http://www.jetbrains.com/resharper/" target="_blank">ReSharper</a>, a refactoring tool. With Re#, I made the very deliberate effort to learn the keyboard commands instead of relying on the mouse in order to code faster. This is now a very common approach I take with most new development tools and applications that I start to use: learn as many key commands as possible.<br /><br />One quick mention regarding setting up Emacs on Windows. If you want Emacs to be truly installed on your PC (meaning adding it to the Windows registry, adding a shortcut to your start menu, etc.) then I recommend <a href="http://derekslager.com/blog/posts/2006/12/emacs-hack-1-installing-emacs-on-windows.ashx" target="_blank">running the file 'addpm.exe' found in the bin folder from the zipped file for Emacs</a>. This is completely optional and you can obviously run and use Emacs without it. However, it does help to integrate it a bit more with your Windows environment.<br /><br /><b>Emacs and Visual Studio</b><br /><br />The book "Pragmatic Programmer" seems to imply that a text editor should be your main IDE. However, primarily being a .NET developer, my primary IDE is, of course, Visual Studio. Therefore, I cannot truly have Emacs as my primary editor. If I did, I'd miss out on some of the features built into VS such as Intellisense. But my main problem is <span style="font-style: italic;">really </span>missing out on the sheer power of ReSharper.<br /><br />But wait, not all is lost. Believe it or not, as it turns out, <a href="http://www.jakevoytko.com/blog/2008/06/09/visual-studio-and-emacs-at-the-same-time/" target="_blank">Visual Studio actually natively supports changing your key bindings to use Emacs</a>! Now, I potentially might have the best of all worlds: <span style="font-weight: bold;">VS + Re# + Emacs</span>. 
Although VS obviously does not have all of Emacs' features, at least I can continue to use and develop my Emacs-specific editing skills. Who knows? Since one of Emacs' greatest assets is extensibility, it may be possible to add some of the missing Re# features into Emacs itself. (This would imply my learning ELisp, but I doubt it :-)) Perhaps some add-ins for Emacs already exist and I just have to find them.<br /><br />It turns out that Emacs is not the only editor that can be supported by VS. In addition, I recall reading earlier this year how <a href="http://blog.jpboodhoo.com/HookedOnVIM.aspx" target="_blank">Jean-Paul S. Boodhoo started using VI with Visual Studio</a>. (He also has some <a href="http://blog.jpboodhoo.com/SearchView.aspx?q=vim" target="_blank">more recent posts</a> on his experiences, particularly VI with ReSharper.) This was an early indication that perhaps I was wrong in my opinion regarding VI, and that it was simply a case of a very "green" programmer like myself not yet understanding the power of a development tool and the inefficiencies of relying on a mouse. Perhaps down the road I might give VI a try as well.<br /><br />I will certainly have future postings on my experiences with Emacs. In the meantime, I have to make sure to avoid "<a href="http://en.wikipedia.org/wiki/GNU_Emacs#Emacs_Pinky" target="_blank">Emacs Pinky</a>".Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-53766978338452253142008-09-09T23:58:00.000-07:002010-01-11T23:59:45.819-08:00Cryptic Rhino Mock exception messages<span>Let me first start off by saying that <a href="http://www.ayende.com/projects/rhino-mocks.aspx">Rhino Mocks</a> is a great mock objects framework for unit testing in .NET and C#. 
As compared with <a href="http://sourceforge.net/projects/nmock2/">NMock2</a>, which was my first experience with testing using mock objects, it is far superior (the use of strongly typed method/property names instead of strings is one of its best features, especially for TDD and refactoring). However, there are some aspects of NMock2 that I do miss.<br /><br /><span style="font-weight: bold;">'Expect' Consistency</span><br /><br />For starters, NMock2 was more consistent in how the 'Expect' calls are made versus the way Rhino Mocks does it. In NMock2, the use of 'Expects' is the same whether you use a void method or a method that returns a value:<br /><pre class="csharpcode">Expect.Once.On(mockFoo).Method(<span class="str">"SomeMethodThatReturnsAValue"</span>);<br />Expect.Once.On(mockFoo).Method(<span class="str">"SomeVoidMethod"</span>);<br /></pre>That is not the case with Rhino Mocks. 'Expects' can only be used with methods that return values and <span style="font-weight: bold;">not</span> with void methods.<br /><br />Recently, <a href="http://ayende.com/Blog/archive/2007/10/17/Rhino-Mocks-Void-methods-using-Expect.Call.aspx">a new way of expressing 'Expects' with void methods</a> was added to the Rhino Mocks framework, but it relies on 'delegates'. I am not sure I really like the solution. It trades off one form of weak readability for another, albeit different, one.<br /><br /></span><span>This could be yet another reason that turns newbies off from testing with a mock framework such as Rhinos. It can be confusing. It is already quite a difficult endeavor to convince software developers of the virtues of unit testing. It is even more difficult to </span><span>promote </span>mock object testing, so anything that lowers the barriers is important and critical.<br /><span><br /><span style="font-weight: bold;">Understandable Exception Messages</span><br /><br />In addition, Rhino Mock exception messages can sometimes be vague and unclear. 
This can be frustrating for new (and even existing) users.<br /><br />For example, I was recently working on an old test fixture for a project which uses <span class="nfakPe">NMock2</span> and not Rhinos as its testing framework. To some degree, I felt a bit more productive with, and in control of, it because the error messaging is a lot more user friendly. I could more quickly determine the cause of a problem.<br /><br />Below is an actual exception I received from <span class="nfakPe">NMock2</span>:<span style="color: rgb(0, 0, 153);" class="nfakPe"><br /><pre class="csharpcode"><br />NMock2.Internal.ExpectationException: not all expected invocations<br />were performed<br />Expected:<br />1 time: criteria.SetFirstResult(equal to <50>) [called 0 times]<br />1 time: criteria.SetMaxResults(equal to <5>) [called 0 times]<br />1 time: criteria.List(any arguments), will <span class="kwrd">return</span><br /><System.Collections.Generic.List`1[System.DateTime]> [called 0 times]</pre></span><div id=":1rd" class="ArwC7c ckChnd"><br />Now, here is what I might get from Rhinos:<span style="color: rgb(0, 0, 153);"><pre class="csharpcode"><span style="color: rgb(0, 0, 153);">Rhino.Mocks.Exceptions.ExpectationViolationException:</span><br /><span style="color: rgb(0, 0, 153);">ICriteria.SetFirstResult(50); Expected #1, Actual #0.</span><br /><span style="color: rgb(0, 0, 153);">ICriteria.SetMaxResults(5); Expected #1, Actual #0.</span></pre></span><br />Honestly, I like the first one better. It reads better to me. For one thing, Rhinos provides the raw (CLR?) object definition, so that if the member is inherited from an interface or another class then it is shown as defined for the interface (i.e. "ICriteria") or <span><span>the base class</span></span>. Meanwhile, <span class="nfakPe">NMock2</span> shows the actual local variable name used in the code you are testing (i.e. "criteria"). 
Much faster to pinpoint the culprit.<br /><br />In fact, where this really drives me crazy is with the <a href="http://en.wikipedia.org/wiki/Domain_object">domain objects</a> (i.e. POCOs, business objects, etc.) for that same project. Every domain object inherits from IDomainObject, so with Rhino Mocks I get this:<br /><pre class="csharpcode">IDomainObject.Description<br /></pre>OK....but which domain object is it? If I happen to have two or more domain objects being mocked/stubbed in my test, it can get really hard figuring out the one it's complaining about. Instead, it would be nice if Rhino provided the following, as does <span class="nfakPe">NMock2</span>, using the variable name (assuming my domain object is named 'Foo'):<br /><pre class="csharpcode">foo.Description<br /></pre>Another example of the disparity between the two frameworks: if a property related exception occurs, then the Rhino message would contain this:<br /><pre class="csharpcode">IFooView.set_PageSize<br /></pre>while <span class="nfakPe">NMock2</span> would provide this:<br /><pre class="csharpcode">_view.PageSize // instance variable name<br /></pre>Some would say: "What's the big deal?", "Can't you figure out what it is?", "It only takes a few seconds to know what it is", etc. Well, that is the problem. If my brain has to stop to process what it is, even if it takes a few seconds, then that is slowing me down during my software development process. Multiply those "few" seconds by how many times you get Rhino exceptions like that and it does eat away at your development time. It does add up over time. It is not unlike trying to read code that is not very readable or well factored. Sure, you'll eventually figure out what it does, but at the cost of precious dev time.<br /><br /></div><span style="font-family:georgia;">The following exception message is one I'm fairly certain I have gotten before but always forget because the message is so...well...CRYPTIC!!!! 
</span><span style="font-size:100%;"><br /></span> <span style="font-family:Courier New;"><span style="font-weight: bold;font-family:Trebuchet MS;font-size:100%;" ><blockquote>System.InvalidOperationException: Previous method 'IView.get_ReturnSomeStringValue();' require a return value or an exception to throw.</blockquote></span><span style="font-family:georgia;">If you specify the wrong data type in the 'Return' method of an Expect (or LastCall), then the above exception will be thrown. For example, if the method or property is supposed to return a 'string' type value but you instead specify a 'DateTime' type as shown below:</span></span><br /><pre class="csharpcode">DateTime date = DateTime.Today;<br />Expect.Call(_view.ReturnSomeStringValue).Return(date);<br />// <span class="kwrd">this</span> will <span class="kwrd">throw</span> an exception</pre><span style="font-family:Courier New;"><span style="font-family:georgia;">then you will receive the error message mentioned earlier.</span><br /><br /><span style="font-family:georgia;">Specifying the proper type should fix the problem as follows:</span></span><br /><pre class="csharpcode"><span class="kwrd">string</span> someStringValue = <span class="str">"some string value"</span>;<br />Expect.Call(_view.ReturnSomeStringValue).Return(someStringValue);<br />// <span class="kwrd">this</span> <span class="kwrd">is</span> ok</pre><span style="font-family:Courier New;"><span style="font-family:georgia;">The exception message should really be about the mismatched return type, not the absence or lack of a return value.</span></span></span>Unknownnoreply@blogger.com6tag:blogger.com,1999:blog-8416807644832025349.post-23952049489554053912008-08-29T06:03:00.000-07:002008-09-10T01:06:38.194-07:00"Make Something People Want"I came across <a href="http://online.wsj.com/article/SB121849293102231361.html" target="_blank">this article in the Wall Street Journal</a> regarding issues that sellers have been having with eBay. 
This reminded me of a recent conversation I had <a href="http://blog.sneal.net/blog/default.aspx">with another software developer friend of mine</a> regarding how craigslist could be improved with...something else.<br /><br />If a potential competitor of eBay were to read about its users' woes, then it would definitely <a href="http://blogs.wsj.com/independentstreet/2008/08/12/four-big-gripes-of-ebay-sellers/" target="_blank">be a nice blueprint</a> on how to build something better. Why? Because it is "<a href="http://www.paulgraham.com/good.html" target="_blank">something people want</a>" but are not getting.<br /><br />I don't know if eBay's problems could necessarily be solved via technology (although some looked to be that way). But it serves to show that just when you think something has been "solved", think again. Has online auctioning really been "solved"? Have online "classified ads" been solved? Has "[fill in the blank]" been solved?<br /><br />The classic example is Google. When it first entered the search engine market, everyone believed that search had been "solved" and that Google could not possibly succeed against the more "mature" search engines available at the time...well, we all know how that story ends. But even with Google being what it is today, you still cannot assume that search engines have been truly solved. For example, <a href="http://www.cuil.com/" target="_blank">there are folks that do feel it is <b>not</b> solved and therefore are attempting to take on Google</a>. But given Google's own beginnings it does not sound strange at all and should serve as an illustration that it can be done (or at least attempted).<br /><br />If you do want to build something people actually want, then keep some of the following things in mind with respect to your competitors:<br /><ul><li><b>Keep your features list smaller</b>- More features != better product.
Most people think more features make for a better product, but that is a fallacy that has been exposed time and again. Keep it simple and focused on features that people would actually use. It does not matter how many features your competitors have in their products. It only makes their products worse and harder to use. Check out the book '<a href="http://www.amazon.com/Inmates-Are-Running-Asylum/dp/0672316498" target="_blank">The Inmates Are Running the Asylum</a>' to get a better idea as to why this is true.<br /></li><li><b>Make it easier to use</b>- Sounds "easy" but it is not. You really need to cut down on the amount of friction involved in using your software as compared to your competitor's. Make it goal oriented and not task oriented. Check out the previously mentioned book as well as '<a href="http://www.amazon.com/Dont-Make-Me-Think-Usability/dp/0321344758/ref=pd_sim_b_3/103-8909643-5263801" target="_blank">Don't Make Me Think!</a>' for infinitely better explanations than mine.</li><li><b>Stay smaller and leaner- </b>Your main goal is your product. Try not to worry about building an empire. Do not focus on other unrelated products that you could make. Do not aim to take the whole market, because in some markets even a share as low as 5% can be quite lucrative (you might be serving some niche that is not being adequately satisfied by whoever is the current market leader). Do not hire extraneous folks who only serve to drive up your costs. And so forth...</li><li><b>Ignore your competitors-</b> Seems like a contradiction of the previous points but not really. Who cares what your competitor is doing? Care about what they are <u><i>not</i></u> doing and what that means for the users.<br /></li></ul>Finally, as a software developer/engineer/programmer/<wbr>hacker, what can you do?
I suggest you start <a href="http://www.paulgraham.com/articles.html" target="_blank">here</a>.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-55088929206504552362008-08-26T22:08:00.000-07:002010-01-11T23:59:45.821-08:00Mocks vs Traditional Asserts<span style="font-style: italic;"></span>I encountered something extremely interesting regarding assertions versus mocks. Traditional assertions (e.g. the NUnit asserts) are typically for "state-based" type testing while mocks are for "interaction-based" type testing.<br /><br />I find that I am using assertions less and less. I'm not sure if that is a good or bad thing. A few weeks ago, I was working on adding some functionality to one of the domain objects for the project at work; domain objects, unlike Controller classes in MVC, tend to call for state-based testing rather than interaction-based testing. At least that's what I thought.<br /><br />Below is a simple method that I needed to test:<br /><pre class="csharpcode"><span class="rem">// domain object- TaskManager</span><br /><span class="kwrd">public</span> IList<Task> Reassign(IList<Task> tasks, <span class="kwrd">string</span> newTeam)<br /> {<br /> <span class="kwrd">foreach</span> (Task task <span class="kwrd">in</span> tasks)<br /> {<br /> task.Team = newTeam;<br /> }<br /><br /> <span class="kwrd">return</span> tasks;<br /> }</pre>The following test uses traditional assertions:<br /><pre class="csharpcode"> [Test]<br /> <span class="kwrd">public</span> <span class="kwrd">void</span> CanReassignTasksToNewTeamWithAsserts()<br /> {<br /> TaskManager manager = <span class="kwrd">new</span> TaskManager();<br /> <br /> <span class="kwrd">const</span> <span class="kwrd">string</span> oldTeam = <span class="str">"Old Team"</span>;<br /> <span class="kwrd">const</span> <span class="kwrd">string</span> newTeam = <span class="str">"New Team"</span>;<br /><br /> <span class="rem">// setup test data for tasks</span><br /> IList<Task> tasks = <span class="kwrd">new</span> List<Task>();<br /> Task task;<br /> <span 
class="kwrd">for</span> (<span class="kwrd">int</span> idx = 0; idx < 3; idx++)<br /> {<br /> task = <span class="kwrd">new</span> Task();<br /> task.Team = oldTeam;<br /> tasks.Add(task);<br /> }<br /><br /> <span class="rem">// assert the re-assignment</span><br /> IList<Task> updatedTasks = manager.Reassign(tasks, newTeam);<br /> <span class="kwrd">foreach</span> (Task updatedTask <span class="kwrd">in</span> updatedTasks)<br /> {<br /> Assert.That(updatedTask.Team, Is.EqualTo(newTeam), <span class="str">"The task's team was not re-assigned."</span>); <br /> }<br /> }<br /></pre><br />Now here is another test whose intention is to test the exact same thing but using mocks instead:<br /><!-- code formatted by http://manoli.net/csharpformat/ --><br /><pre class="csharpcode"><br />[Test]<br /> <span class="kwrd">public</span> <span class="kwrd">void</span> CanReassignTasksToNewTeamWithMocks()<br /> {<br /> TaskManager manager = <span class="kwrd">new</span> TaskManager();<br /><br /> <span class="kwrd">const</span> <span class="kwrd">string</span> newTeam = <span class="str">"New Team"</span>;<br /><br /> <span class="rem">// setup test data for tasks and set expectations</span><br /> IList<Task> tasks = <span class="kwrd">new</span> List<Task>();<br /> Task task;<br /> <span class="kwrd">for</span> (<span class="kwrd">int</span> idx = 0; idx < 3; idx++)<br /> {<br /> task = Mocks.CreateMock<Task>();<br /> tasks.Add(task);<br /><br /> <span class="rem">// set expectation to assign task to new team</span><br /> task.Team = newTeam;<br /> LastCall.Repeat.Once();<br /> }<br /><br /> Mocks.ReplayAll();<br /><br /> manager.Reassign(tasks, newTeam);<br /><br /> Mocks.VerifyAll();<br /> }</pre><br />Guess which one I wrote first? Of course, the one with mocks, even though I started with the complete intention of doing it with state-based assertions; it quickly morphed into using mocks.<br /><br />It was really interesting to produce these two tests that are functionally different but accomplish the same goal.
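<br />As an aside, the same state-vs-interaction contrast can be sketched outside of .NET. Below is a hypothetical Python analog of the Reassign scenario using the standard-library unittest.mock module; the class and attribute names are my own inventions, not from the C# code above, and PropertyMock stands in for Rhino Mocks' property expectations:<br />

```python
from unittest.mock import Mock, PropertyMock

class TaskManager:
    """Hypothetical Python analog of the C# TaskManager above."""
    def reassign(self, tasks, new_team):
        for task in tasks:
            task.team = new_team
        return tasks

class Task:
    def __init__(self, team):
        self.team = team

manager = TaskManager()

# State-based style: real objects, assert on the resulting state.
tasks = [Task("Old Team") for _ in range(3)]
for task in manager.reassign(tasks, "New Team"):
    assert task.team == "New Team"

# Interaction-based style: verify the 'team' setter was invoked exactly
# once per task. A PropertyMock must live on the mock's type, so each
# task gets its own throwaway Mock subclass.
mock_tasks, setters = [], []
for _ in range(3):
    task_cls = type("MockTask", (Mock,), {})
    setter = PropertyMock()
    task_cls.team = setter
    mock_tasks.append(task_cls())
    setters.append(setter)

manager.reassign(mock_tasks, "New Team")
for setter in setters:
    setter.assert_called_once_with("New Team")
```

<br />Just like the two C# tests above, both styles pass against the same production code while checking very different things: one inspects final state, the other records and verifies the interaction.<br />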
They both "fail" if you remove the line:<br /><br /><pre>task.Team = newTeam;<br /></pre><br />or if you place a different value:<br /><br /><pre>task.Team = "Make this test fail.";<br /><br /></pre>Since they fail by doing either of the above that means both tests are good, valid tests, right?<br /><br />So, which should I use? Truthfully, the test with mocks is less brittle because you do not need a real instance of the "Task" object. But am I taking it too far? One "problem" I seem to have is that since I have been using mocks for so long my mind is wired to use them for everything (once again, is that a good thing or an anti-pattern?) Basically, when I think about how to test something I immediately think in terms of expectations with dependencies.<br /><br />Perhaps mocks win out in this situation and in most it is better because true unit testing means that the only real instance of an object is the one that you are trying to test and essentially everything else should be mocked and/or stubbed somehow. Perhaps assertions are best with objects that are not primarily defined by their dependencies and that simply perform complex algorithms that return value types results (for example, static classes and methods). Of course, I could be oversimplifying that but I find it really hard to know when to use plain vanilla assertions.<br /><br />Well, shortly after stumbling upon this "dilemma" on my own, I then read Martin Fowler's article named <a href="http://martinfowler.com/articles/mocksArentStubs.html" target="_blank">Mocks Aren't Stubs</a> and it became much clearer to me what I was doing and why (as it always seem to happen whenever I read any of Fowler's stuff). 
According to him, I would be classified as a "mockist TDD practitioner".<br /><br />Honestly, some of the reasons he lists for choosing not to be one (as opposed to a "classical TDD practitioner") are things that I have definitely felt on my own, especially recently when I was struggling with a bunch of mock-heavy tests that grew unwieldy and far more complex than the thing I was actually testing (let's just say it was a weird, dark period in my recent dev efforts during which I was really questioning the use of mocks).<br /><br />The quote below from him is definitely something that I have thought to myself off and on for as long as I have been doing "mock testing":<br /><br />"...A mockist is constantly thinking about how the SUT ["system under test" a.k.a. the object under test] is going to be implemented in order to write the expectations. This feels really <span style="font-style: italic;">unnatural</span> to me..."Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-8416807644832025349.post-5935306891473465922008-08-25T23:49:00.000-07:002022-10-18T11:58:08.356-07:00Applying good software development practices to VBA and MS Access (and tools, too!)<span style="font-style: italic;"></span>After working with TSQL almost exclusively for the last six months, the time came last month when I had to make code changes to a Microsoft Access front-end application at work.<br /> <br />Based on what I have learned in the past several years regarding software development in .NET, I decided to arm myself for tackling legacy code in a platform and language notorious for not being ideal for software development.<br /> <br /><b>IDE Tools</b><br /><br />I accidentally came across <a href="http://www.mztools.com/index.aspx" target="_blank">MZ-Tools</a>, a plugin for the VB Editor IDE, and wow! This made all the difference in the world.
It was nice to find a free tool to inject a little <a href="http://www.jetbrains.com/resharper/">ReSharper</a>-ish support into the rigid VBA IDE. The product apparently started in the VB\VBA world and when .NET came along <a href="http://www.mztools.com/v6/mztools6.aspx" target="_blank">it was ported to Visual Studio.NET</a>. (It can actually be considered a direct competitor to Re#. Note that the .NET version is <u>not</u> free.)<br /> <br />Here are the features I was using actively during coding:<br /><ul><li><a href="http://www.mztools.com/v3/onlinehelp/html_replace_in_all_projects.htm" target="_blank"><span>Replace In All Projects</span></a>- This was huge in that it allowed me to actually perform the refactoring technique, 'Rename', with various levels of control. What really made it possible is the <a href="http://www.mztools.com/v3/onlinehelp/html_results_window.htm" target="_blank">actual visual tree view of your code</a> where the text you want to change shows up, along with which function/sub it can be found under. I was not afraid to actually change names of things that were goofy, confusing, or antiquated in coding style into much more meaningful names. In the end, I had significantly less fear of breaking code. (This feature is equivalent to Re# 'Find Usages Advanced')<br /></li><li><a href="http://www.mztools.com/v3/onlinehelp/html_procedure_callers.htm" target="_blank">Procedure Callers</a>- Similar to Re# 'Show all usages' for a method or variable (and it's also navigable via a visual tree view)</li><li><a href="http://www.mztools.com/v3/onlinehelp/html_code_templates.htm" target="_blank">Code Templates</a>- This allows me to create and permanently store boilerplate code that I needed for testing code using VBLiteUnit (see next section).
This feature opened my eyes to using and creating 'Live Templates' in Re# more actively.</li><li><a href="http://www.mztools.com/v3/onlinehelp/html_add_procedure.htm" target="_blank">Add Procedure</a>- Slick and fast way to create more boiler-plate code for procs/funcs.</li><li><span><a href="http://www.mztools.com/v3/onlinehelp/html_add_error_handler.htm" target="_blank">Add Error Handler</a>- This was a gift from the heavens. Being able, with a simple keystroke, to drop in error-handling code in any sub/function was wonderful. Error handling is horrible in VB/VBA, so this significantly helped in reducing the pain.</span></li><li><span><a href="http://www.mztools.com/v3/onlinehelp/html_add_module_header.htm" target="_blank">Add Module Header</a> and <a href="http://www.mztools.com/v3/onlinehelp/html_add_procedure_header.htm" target="_blank">Add Procedure Header</a>- It's just like having <a href="http://www.roland-weigelt.de/ghostdoc/">GhostDoc</a>. Adding comments is very important in legacy code.</span></li><li><span><a href="http://www.mztools.com/v3/onlinehelp/html_sort_procedures.htm" target="_blank">Sort Procedures </a>- Helps organize your code. It is equivalent to Re#'s 'File Structure Popup' which I use all the time for the same reasons: drag-n-drop your code to how you see fit.<br /> </span></li><li><span><a href="http://www.mztools.com/v3/onlinehelp/html_private_clipboards.htm" target="_blank">Private Clipboard</a>- This can be big in legacy code and in a language where you do end up with a lot of cutting and pasting. Actually, it is better than Re#'s version because you can actually control what is stored in it and can retain it for as long as you like within the duration of your session.<br /> </span></li><li><a href="http://www.mztools.com/v3/onlinehelp/html_review_source_code.htm" target="_blank">Review Source Code</a>- An extremely limited version of code analysis in Re#. It only tells you if a variable, constant, or procedure is not being used.
But good nonetheless to clean up your code and get rid of cruft.<br /> </li></ul>The following features were interesting but less used day to day:<br /><ul><li><a href="http://www.mztools.com/v3/onlinehelp/html_xml_documentation.htm" target="_blank">Generate XML Documentation</a>- produces a very nice readable xml doc about your code comparable to XML CHM output in VS</li><li><a href="http://www.mztools.com/v3/onlinehelp/index.html" target="_blank">Statistics</a>- You can actually see how your lines of code are distributed in your code base.<br /></li></ul>The one thing I wish <span class="nfakPe">MZTools</span> had, which IMHO is one of the two most important refactoring techniques, is '<a href="http://www.refactoring.com/catalog/extractMethod.html">Extract Method</a>' ('<a href="http://www.refactoring.com/catalog/renameMethod.html">Rename Method</a>' being the other). If it had that, it would be quite the rock-solid refactoring tool for VBA. Nonetheless, MZTools, just like Re# in Visual Studio, has given me control over my code instead of the code controlling me.<br /><br /><b>Unit Testing</b><br /><br />As a dedicated practitioner of Test Driven Development (TDD), I felt this was important to do on a lot of levels. After looking at two options, I settled on <a href="http://vb-lite-unit.sourceforge.net/" target="_blank">VBLiteUnit</a> because it is extremely lightweight and the ramp-up time is short, especially if you are familiar with any of the other <a href="http://en.wikipedia.org/wiki/XUnit" target="_blank">xUnit frameworks</a> (which is exactly what the author intended in creating it, hence the word "Lite" in its name. My belief is that the author might have thought that the existing one out there, <a href="http://sourceforge.net/projects/vbaunit/" target="_blank">VBAUnit</a>, was too bulky and cumbersome to use and maintain.
Definitely a much more "pragmatic" and "agile" approach in his solution.)<br /> <br />The author decided to use the VB construct '<b>Select Case</b>' (think 'Switch') with each leg of your 'Case' defining an individual test. Impressively, it works really well (an added bonus is that your test descriptions can be more natural in tone because they are just string text. Interesting to note as compared with <a href="http://blog.sneal.net/Blog/UnitTestNamingConventions.aspx" target="_blank">a blog post on Unit Test Naming Conventions written by </a><a href="http://blog.sneal.net/Blog/UnitTestNamingConventions.aspx" target="_blank">a developer I used to work with</a>)<br /> <br />To implement my new changes, as expected, I did have to do some refactoring of the code that needed to be touched. This generally resulted in new testable classes, but I had TDD with VBLiteUnit (and <span class="nfakPe">MZTools</span>) to lead the way.<br /><br />All in all, it was very satisfying to be able to actually do TDD in Access/VBA. It gave me that same feeling that I get when doing TDD in C#. For a moment, the platform felt much more like real software development than it ever had before.<br /> <br /><b>Results</b><br /><br />Some lessons were learned of course ("evolve or die"). If you have no choices, resources, etc. in a situation, no matter how "trivial", you still attack your problem 100%. Why? Because you not only get to apply and transfer knowledge and techniques to another area of software development, but you might also actually learn some new things that you can in turn use later on. For example,<br /> <ul><li>I started using the Immediate Window much more in Visual Studio.</li><li>I started using features in Re# that I had not used or had dismissed before (like the live templates, copy clipboard, etc.)</li><li>Design patterns can still be used even with what some might consider a "rudimentary" language.
For example, I was actually able to implement a variation of the <a href="http://martinfowler.com/eaaCatalog/repository.html" target="_blank">Repository pattern</a> (a la Eric Evans' DDD) that hydrated a domain object, and make it work. Hence, if you understand design patterns and how to use them, then it does not matter what OO language you are using.<br /> </li></ul>However, I can see why VB developers don't know OOP. One good reason is that VB is lacking some key features of an OOP language. The biggest one in my opinion is that it does not support <a href="http://en.wikipedia.org/wiki/Implementation_inheritance" target="_blank">implementation inheritance</a>. That was frustrating because I was trying to use a technique that Michael Feathers described in his book, <a href="http://www.amazon.com/Working-Effectively-Legacy-Robert-Martin/dp/0131177052">Working Effectively with Legacy Code</a>, to test legacy code. That entails creating a seam in your code by inheriting and then overriding an external dependency. Unfortunately, VB simply enforces the interface contract but does not carry over the implementation code from the base class to its derived ones.<br /> <br />Nonetheless, using VBA with VBLiteUnit and <span class="nfakPe">MZTools</span> actually turned out to be a much more gratifying and, yes, may I even say aloud, "fun" experience than I expected. The main reason, beyond the more obvious ones such as being able to do TDD, was that I had been doing so much TSQL just prior that it was like being transported out of the Stone Age.
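<br />Backing up to the inheritance point for a moment: the subclass-and-override seam Feathers describes, which VB's interface-only inheritance rules out, is worth seeing in a language that does support implementation inheritance. Below is a minimal Python sketch with invented names (this is my illustration of the technique, not code from the Access project):<br />

```python
class ReportGenerator:
    """Legacy-style class with a hard-wired external dependency."""
    def build_report(self, account_id):
        rows = self.fetch_rows(account_id)  # the seam: an overridable call
        return f"{len(rows)} rows for account {account_id}"

    def fetch_rows(self, account_id):
        # Imagine a real database call here.
        raise RuntimeError("no database available in tests")

# Testing subclass: inherit the real logic, override only the dependency.
class TestableReportGenerator(ReportGenerator):
    def fetch_rows(self, account_id):
        return [("a", 1), ("b", 2)]  # canned data instead of a DB hit

report = TestableReportGenerator().build_report(42)
assert report == "2 rows for account 42"
```

<br />Because Python carries the base class's build_report implementation into the subclass, only the dependency is replaced; in VB, implementing an interface gives you the contract but none of the logic, so this trick is unavailable.<br />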
The TSQL language and Microsoft's IDE for it (SQL Server Mgt Studio) were so frustrating to use that working again in a programming language designed for applications, even one as "minor/toy" and stigmatized as Access VBA, was such a breath of fresh air and extremely invigorating (it was like giving a tablespoon of water to a very thirsty person).<br /> <br />TSQL was and still is such a struggle even with <a href="http://tsqlunit.sourceforge.net/">TSQLUnit</a>. It has so many limitations when it comes to creating flow logic, reusable code, and testability. (Granted, it is a DSL specialized and intended for manipulation of data and its storage structure, but I feel that the SQL language has not really evolved much in its 30-odd years of existence. In the areas where it has changed, it ends up resembling other non-SQL languages, so what's the point? Perhaps it's time for a better replacement language(s)?) In the future, any application developer that I encounter again who says "sprocs are the way to go" will get an earful and maybe more.<br /> <br />TSQL is so painful in its existing form that, believe it or not, I'd rather be working in VBA than in TSQL if those were my only options!! I know that sounds crazy and shocking, but that's how much I prefer not to have to do database development and deal with TSQL. I'll leave it to the folks who actually enjoy it. But sadly it is the bulk of my work these days. (The one thing I do like about it is its language support for "dynamic SQL", but that is not enough of a motivation for me.)<br /><br />Who knows? I went from C# --> SQL --> VBA --> SQL in the last six months. If I had gone from C# --> VBA I probably would have a different opinion and experience. Hard to tell in retrospect.<br /> <br />Don't worry, I still do C# and have been doing it for the last six months in parallel on side projects and supporting processes at work. Unfortunately, I am back to doing TSQL for the next several months.
Aargh!!!<br /><br /><b>Final Thoughts</b><br /><br />What was the point of all of this other than to rant on about a language no one cares about if you consider yourself a "serious" programmer? Well, just like VB6, <a href="http://www.adtmag.com/article.aspx?id=20592" target="_blank">Microsoft plans on retiring VBA</a> and possibly allowing Office apps to be supported by any .NET language including C#!! Therefore, the tools mentioned above are "dead" in the sense that no new development is going to be done on them, especially since their respective authors appear to have moved on from doing any new releases.<br /> <br />IMHO, what Microsoft <i>really</i> should do is allow <b>dynamic languages</b> to support their Office apps. That is where dynamic languages (ex. Ruby, Python, BOO, etc.) seem like such a natural and obvious fit for the nature of that work. Just imagine: you can write code very quickly against ever-changing requirements, and it does not have to perform fast (as compared with statically typed languages). Seems like such a "no-brainer". (Maybe in the not-too-distant future, power users could even be writing MS Excel or Word macros in <b>F#</b>? Just imagine that!)Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-8416807644832025349.post-48947432650933492602008-07-27T22:47:00.000-07:002010-01-11T23:59:45.824-08:00Snake bitten by Python (R.I.P. NAnt)After first experimenting last year with <a href="http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython">IronPython</a>, the .NET implementation of <a href="http://www.python.org/">Python</a>, I decided to take the full plunge into Python itself, leading to another major milestone in my programming career. Why? Because I now have an incredibly handy language, Python, that can superbly manage rudimentary but necessary development tasks. Furthermore, Python has enlightened me to yet another way of thinking about how code can be written.<br /><br />As a dynamic language, Python can be extremely powerful.
It can be used for "glue" tasks like scripting but its potential is even greater. By being an interpreted language, pieces of your code can both be written and tested practically at the same time via its interpreter console (as if the Immediate Window in Visual Studio were the means to simultaneously see how your code works <i>while you are writing it</i>. No "<a href="http://www.codinghorror.com/blog/archives/000860.html">compilation tax</a>".) In addition, Python can do both OOP (e.g. classes, et al) as well as functional programming (e.g. treating functions like data in lists and supporting lambdas like Lisp). Finally, with its leaner and less verbose syntax, less code is written as compared with other static languages.<br /><br />After a few days to ramp up and get acquainted with the language, I immediately started to implement Python on a few things. <a href="http://nant.sourceforge.net/">NAnt</a> build scripts and Windows bat files were the main targets for Python conversion. I also intend on rewriting in Python a C# .NET console tool that merges the content of multiple files into a single one. It seemed more natural and sensible to use Python for these types of development tasks.<br /><br />Replacing NAnt with Python is favored since NAnt is an XML-based <a href="http://en.wikipedia.org/wiki/Domain_Specific_Language">DSL</a> that might be doing a little too much. The problem is not that it is a DSL, generally a good thing particularly when done right (as the Ant/NAnt folks succeeded quite well in doing to their credit), but the part of it being "XML-based". <a href="https://rhino-tools.svn.sourceforge.net/svnroot/rhino-tools/trunk/default.build">Who really wants to program all day in XML?</a> <a href="http://en.wikipedia.org/wiki/Apache_Ant">Apache Ant</a>, NAnt in the Java world, was a victim of the exploding popularity of XML during the height of the dot com era web applications. 
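<br />To make the "build flow logic belongs in a real language" point concrete, here is a toy sketch (my own illustration, not any real build framework and not my actual scripts) of how NAnt-style targets and dependencies can be expressed as plain Python functions:<br />

```python
# Toy target/dependency runner: each target is just a function, its
# dependencies run first, and each target runs at most once per session.
executed = []

def target(*deps):
    """Decorator that wires up dependencies for a build target."""
    def wrap(fn):
        def run():
            for dep in deps:
                dep()                     # resolve dependencies first
            if fn.__name__ not in executed:
                executed.append(fn.__name__)
                fn()                      # then run the target body once
        run.__name__ = fn.__name__
        return run
    return wrap

@target()
def clean():
    pass  # e.g. shutil.rmtree("build", ignore_errors=True)

@target(clean)
def build():
    pass  # e.g. subprocess.run([...], check=True)

@target(build)
def test():
    pass  # e.g. run the unit test suite

test()  # runs clean, then build, then test
assert executed == ["clean", "build", "test"]
```

<br />The whole "framework" is a dozen lines of ordinary Python with real loops, conditionals, and functions available everywhere, instead of build flow encoded in XML attributes.<br />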
XML should be left to do what it does best and what it was originally intended for: basic structured data storage and configuration. Ant (and, subsequently, NAnt) should not have mixed the following two: (1) formatting/organizing data and (2) build flow logic. Not a far cry from violating the principle of "<a href="http://en.wikipedia.org/wiki/Separation_of_concerns">separation of concerns</a>".<br /><br />If I can avoid it, I am <b>finished</b> with NAnt (or any other equivalent build frameworks that rely heavily on XML for their flow logic). If given the choice in a development environment, I would probably not opt to use NAnt to handle build scripts. Not that I have anything against NAnt itself, just that better, more programmer-friendly alternatives exist. NAnt was (and still is) a great option as compared, say, with the inferior Windows bat files or with the dev shops that manually build their projects via Visual Studio. Instead, a dynamic language like Python (or <a href="http://boo.codehaus.org/">BOO</a> or <a href="http://www.ruby-lang.org/en/">Ruby</a> or whatever else) is preferable to manage this type of work. (A few build automation frameworks written in Python do exist, but I would like to take a look at the promising, .NET-born <a href="http://code.google.com/p/boo-build-system/">BOO build system</a>.)<br /><br />NAnt documentation mentions that <a href="http://nant.sourceforge.net/release/latest/help/introduction/fog0000000079.html">it has the advantage over native OS shell commands</a> because it is "cross-platform". That might be true, but Python has that area easily covered, specifically with its '<a href="http://docs.python.org/library/os.html">os</a>' and '<a href="http://docs.python.org/library/shutil.html">shutil</a>' modules. Portability is one of Python's key features.<br /><br />Code generation for other programming languages is another area where I have also started to use Python.
Database change scripts written in TSQL that are repetitive and voluminous have benefited significantly from using Python (as one example, creating structurally similar 'drop column' statements for 100+ columns). In the future, for other kinds of code generation (e.g. <a href="http://www.hibernate.org/hib_docs/nhibernate/html/mapping.html">NHibernate mapping files</a> is one example), I will definitely consider Python as a substitute for heavier code-gen tools such as <a href="http://www.mygenerationsoftware.com/portal/default.aspx">MyGeneration</a>.<br /><br />The more I use Python, the more I am convinced that it will be employed as my general, all-purpose utility programming language. I intend to use it as a vital supporting player handling the grunt work in my development processes. It does not matter what the primary language happens to be whether C#, TSQL, etc. By and large, I just like how quick and dirty a script can be whipped up to perform some auxiliary task without having to endure the overhead of compiling, creating, and running some executable file. (Who knows? Maybe one day I can work on a major project where Python is the star of the show.) All in all, it is just such a nice clean, readable language making it far more enjoyable to work with as compared with something like NAnt.<br /><br />To give an idea on how visually different it is to use Python over NAnt, below are the code of two identical build scripts written in each language. This the first script I ported over to Python. The script runs the SQL Server <a href="http://www.microsoft.com/downloads/details.aspx?familyid=56E5B1C5-BF17-42E0-A410-371A838E570A&displaylang=en">Database Publishing Wizard</a> to generate a file that contains the sql to create the schema of a baseline database required at the start of each development cycle. 
The following high level tasks are executed by the script:<br /><ul><li> Create Schema Script- Generates the raw initial tsql schema script from target database using the DB pub wiz</li><li> Convert Script File Encoding- Convert file from Unicode to ASCII</li><li> Replace Script Values- Read from external csv file containing pairs of strings to replace in script.</li><li> Checkout File From Source Control - Checkout from <a href="http://www.perforce.com/">Perforce</a> the existing schema file that will be replaced.</li><li> Copy File To Build Location- Move sql script file to build directory</li><li> Build Database And Run Unit Tests- Run another separate script (currently written in NAnt) that builds the db and runs the unit tests using <a href="http://tsqlunit.sourceforge.net/">TSQLUnit</a><br /></li></ul><b>NAnt Version</b><br /><pre class="csharpcode"><span class="kwrd"><?</span><span class="html">xml</span> <span class="attr">version</span><span class="kwrd">="1.0"</span>?<span class="kwrd">></span><br /><span class="kwrd"><</span><span class="html">project</span> <span class="attr">name</span><span class="kwrd">="Generic Database Build"</span> <span class="attr">default</span><span class="kwrd">="BaselineDatabaseCreation"</span><br /> <span class="attr">xmlns</span><span class="kwrd">="http://nant.sf.net/release/0.85/nant.xsd"</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="base.dir"</span> <span class="attr">value</span><span class="kwrd">=".\"</span> <span class="attr">overwrite</span><span class="kwrd">="false"</span> <span class="attr">readonly</span> <span class="kwrd">="false"</span> <span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="sourceDB"</span> <span class="attr">value</span><span class="kwrd">=""</span> <span class="attr">overwrite</span><span 
class="kwrd">="false"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="sourceServer"</span> <span class="attr">value</span><span class="kwrd">=".\sqlDev2005"</span> <span class="attr">overwrite</span><span class="kwrd">="false"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="dbmsVersion"</span> <span class="attr">value</span><span class="kwrd">="2000"</span> <span class="attr">overwrite</span><span class="kwrd">="false"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="connectionString"</span> <span class="attr">value</span><span class="kwrd">="Server=${sourceServer};Database=${sourceDB};Trusted_Connection=True;"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="dbBuild.dir"</span> <span class="attr">value</span><span class="kwrd">=""</span> <span class="attr">overwrite</span><span class="kwrd">="false"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="targetDB"</span> <span class="attr">value</span><span class="kwrd">=""</span> <span class="attr">overwrite</span><span class="kwrd">="false"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="targetServer"</span> <span class="attr">value</span><span class="kwrd">=".\sqlDev2005"</span> <span class="attr">overwrite</span><span class="kwrd">="false"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span 
class="kwrd">="sqlScriptingTool.dir"</span> <span class="attr">value</span><span class="kwrd">="C:\Program Files\Microsoft SQL Server\90\Tools\Publishing\"</span> <span class="attr">overwrite</span><span class="kwrd">="false"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="sqlScript.fileName"</span> <span class="attr">value</span><span class="kwrd">="CreateSchema.sql"</span> <span class="attr">overwrite</span><span class="kwrd">="false"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="sqlScript.filePath"</span> <span class="attr">value</span><span class="kwrd">="${path::combine(base.dir, sqlScript.fileName)}"</span> <span class="attr">overwrite</span><span class="kwrd">="false"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="sourceControl.filePath"</span> <span class="attr">value</span><span class="kwrd">="${dbBuild.dir}Schema\${sqlScript.fileName}"</span> <span class="attr">overwrite</span><span class="kwrd">="false"</span><span class="kwrd">/></span><br /> <span class="rem"><!-- replace values list variable --></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="temp.fileName"</span> <span class="attr">value</span><span class="kwrd">="temp.txt"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="temp.filePath"</span> <span class="attr">value</span><span class="kwrd">="${path::combine(base.dir, temp.fileName)}"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span 
class="kwrd">="replaceValuesList.fileName"</span> <span class="attr">value</span><span class="kwrd">=""</span> <span class="attr">overwrite</span><span class="kwrd">="false"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="replaceValuesList.filePath"</span> <span class="attr">value</span><span class="kwrd">="${path::combine(base.dir, replaceValuesList.fileName)}"</span><span class="kwrd">/></span><br /><br /> <span class="kwrd"><</span><span class="html">target</span> <span class="attr">name</span><span class="kwrd">="BaselineDatabaseCreation"</span> <span class="attr">description</span><span class="kwrd">="Creates baseline database tsql script end-to-end."</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">call</span> <span class="attr">target</span><span class="kwrd">="CreateSchemaScript"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">call</span> <span class="attr">target</span><span class="kwrd">="ConvertScriptFileEncoding"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">call</span> <span class="attr">target</span><span class="kwrd">="ReplaceScriptValues"</span> <span class="attr">unless</span><span class="kwrd">="${replaceValuesList.fileName==''}"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">call</span> <span class="attr">target</span><span class="kwrd">="CheckoutFileFromSourceControl"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">call</span> <span class="attr">target</span><span class="kwrd">="CopyNewScriptFileToBuildLocation"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">call</span> <span class="attr">target</span><span class="kwrd">="GetSeedTablesData"</span><span class="kwrd">/></span><br /> <span 
class="kwrd"><</span><span class="html">call</span> <span class="attr">target</span><span class="kwrd">="BuildDatabaseAndRunUnitTests"</span> <span class="attr">unless</span><span class="kwrd">="${targetDB==''}"</span><span class="kwrd">/></span><br /> <span class="kwrd"></</span><span class="html">target</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">target</span> <span class="attr">name</span><span class="kwrd">="CreateSchemaScript"</span> <span class="attr">description</span><span class="kwrd">="Generates the raw initial tsql schema script from target database"</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">delete</span> <span class="attr">file</span><span class="kwrd">="${sqlScript.filePath}"</span> <span class="attr">if</span><span class="kwrd">="${file::exists(sqlScript.filePath)}"</span> <span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">exec</span> <span class="attr">program</span><span class="kwrd">="${sqlScriptingTool.dir}sqlpubwiz"</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">arg</span> <span class="attr">value</span><span class="kwrd">="script"</span> <span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">arg</span> <span class="attr">line</span><span class="kwrd">="-C ${connectionString}"</span> <span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">arg</span> <span class="attr">value</span><span class="kwrd">="${sqlScript.filePath}"</span> <span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">arg</span> <span class="attr">value</span><span class="kwrd">="-schemaonly"</span> <span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">arg</span> <span class="attr">line</span><span class="kwrd">="-targetserver ${dbmsVersion}"</span> <span class="kwrd">/></span><br /> <span class="rem"><!-- '-f' means overwrite 
existing files is true --></span><br /> <span class="kwrd"><</span><span class="html">arg</span> <span class="attr">value</span><span class="kwrd">="-f"</span> <span class="kwrd">/></span><br /> <span class="kwrd"></</span><span class="html">exec</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">fail</span> <span class="attr">message</span><span class="kwrd">="${sqlScript.filePath} was not created."</span><br /> <span class="attr">unless</span><span class="kwrd">="${file::exists(sqlScript.filePath)}"</span> <span class="kwrd">/></span><br /> <span class="kwrd"></</span><span class="html">target</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">target</span> <span class="attr">name</span><span class="kwrd">="ConvertScriptFileEncoding"</span> <span class="attr">description</span><span class="kwrd">="Convert file from Unicode to ASCII"</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">copy</span> <span class="attr">file</span><span class="kwrd">="${sqlScript.filePath}"</span> <span class="attr">tofile</span><span class="kwrd">="${temp.filePath}"</span> <span class="attr">outputencoding</span><span class="kwrd">="ASCII"</span> <span class="attr">overwrite</span><span class="kwrd">="true"</span> <span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">move</span> <span class="attr">file</span><span class="kwrd">="${temp.filePath}"</span> <span class="attr">tofile</span><span class="kwrd">="${sqlScript.filePath}"</span> <span class="attr">overwrite</span><span class="kwrd">="true"</span><br /> <span class="attr">unless</span><span class="kwrd">="${file::exists(replaceValuesList.filePath)}"</span> <span class="kwrd">/></span> <br /> <span class="kwrd"></</span><span class="html">target</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">target</span> <span class="attr">name</span><span 
class="kwrd">="ReplaceScriptValues"</span><br /> <span class="attr">description</span><span class="kwrd">="Read from external csv file containing pairs of strings to replace."</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">fail</span> <span class="attr">message</span><span class="kwrd">="${replaceValuesList.filePath} does not exist."</span><br /> <span class="attr">unless</span><span class="kwrd">="${file::exists(replaceValuesList.filePath)}"</span> <span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">foreach</span> <span class="attr">item</span><span class="kwrd">="Line"</span> <span class="attr">in</span><span class="kwrd">="${replaceValuesList.filePath}"</span> <span class="attr">delim</span><span class="kwrd">=","</span> <span class="attr">property</span><span class="kwrd">="x,y"</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">echo</span> <span class="attr">message</span><span class="kwrd">="Replacing '${x}' with '${y}'..."</span> <span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">copy</span> <span class="attr">file</span><span class="kwrd">="${temp.filePath}"</span> <span class="attr">tofile</span><span class="kwrd">="${sqlScript.filePath}"</span> <span class="attr">overwrite</span><span class="kwrd">="true"</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">filterchain</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">replacestring</span> <span class="attr">from</span><span class="kwrd">="${x}"</span> <span class="attr">to</span><span class="kwrd">="${y}"</span> <span class="kwrd">/></span><br /> <span class="kwrd"></</span><span class="html">filterchain</span><span class="kwrd">></span><br /> <span class="kwrd"></</span><span class="html">copy</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">copy</span> <span 
class="attr">file</span><span class="kwrd">="${sqlScript.filePath}"</span> <span class="attr">tofile</span><span class="kwrd">="${temp.filePath}"</span> <span class="attr">overwrite</span><span class="kwrd">="true"</span><span class="kwrd">/></span><br /> <span class="kwrd"></</span><span class="html">foreach</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">move</span> <span class="attr">file</span><span class="kwrd">="${temp.filePath}"</span> <span class="attr">tofile</span><span class="kwrd">="${sqlScript.filePath}"</span> <span class="attr">overwrite</span><span class="kwrd">="true"</span><span class="kwrd">/></span><br /> <span class="kwrd"></</span><span class="html">target</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">target</span> <span class="attr">name</span><span class="kwrd">="CheckoutFileFromSourceControl"</span> <span class="attr">description</span><span class="kwrd">="Checkout from source control the schema file that will be replaced."</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">fail</span> <span class="attr">message</span><span class="kwrd">="${sourceControl.filePath} does not exist."</span><br /> <span class="attr">unless</span><span class="kwrd">="${file::exists(sourceControl.filePath)}"</span> <span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">p4edit</span> <span class="attr">view</span><span class="kwrd">="${sourceControl.filePath}"</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">arg</span> <span class="attr">line</span><span class="kwrd">="-t"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">arg</span> <span class="attr">line</span><span class="kwrd">="text+k"</span><span class="kwrd">/></span><br /> <span class="kwrd"></</span><span class="html">p4edit</span><span class="kwrd">></span><br /> <span 
class="kwrd"></</span><span class="html">target</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">target</span> <span class="attr">name</span><span class="kwrd">="CopyNewScriptFileToBuildLocation"</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">copy</span> <span class="attr">file</span><span class="kwrd">="${sqlScript.filePath}"</span> <span class="attr">tofile</span><span class="kwrd">="${sourceControl.filePath}"</span> <span class="attr">overwrite</span><span class="kwrd">="true"</span> <span class="kwrd">/></span><br /> <span class="kwrd"></</span><span class="html">target</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">target</span> <span class="attr">name</span><span class="kwrd">="GetSeedTablesData"</span><span class="kwrd">></span><br /> <span class="rem"><!--TODO: Create a separate NAnt build script for this--></span><br /> <span class="kwrd"></</span><span class="html">target</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">target</span> <span class="attr">name</span><span class="kwrd">="BuildDatabaseAndRunUnitTests"</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">nant</span> <span class="attr">buildfile</span><span class="kwrd">="GenericDatabase.build"</span> <span class="attr">inheritall</span><span class="kwrd">="false"</span> <span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">properties</span><span class="kwrd">></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="base.dir"</span> <span class="attr">value</span><span class="kwrd">="${dbBuild.dir}"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="server"</span> <span class="attr">value</span><span 
class="kwrd">="${targetServer}"</span> <span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="database"</span> <span class="attr">value</span><span class="kwrd">="${targetDB}"</span><span class="kwrd">/></span><br /> <span class="kwrd"><</span><span class="html">property</span> <span class="attr">name</span><span class="kwrd">="includeUnitTesting"</span> <span class="attr">value</span><span class="kwrd">="true"</span> <span class="kwrd">/></span><br /> <span class="kwrd"></</span><span class="html">properties</span><span class="kwrd">></span><br /> <span class="kwrd"></</span><span class="html">nant</span><span class="kwrd">></span><br /> <span class="kwrd"></</span><span class="html">target</span><span class="kwrd">></span><br /><span class="kwrd"></</span><span class="html">project</span><span class="kwrd">></span><br /><br /></pre><b>Python Version</b><br /><pre>
import os
import csv
import shutil

# note: source_server, source_db, sqlscript_dir, db_build_dir, replace_values_dir,
# base_dir, target_server, and target_db are supplied by the calling environment
sqlscripting_tool = r'C:\Program Files\Microsoft SQL Server\90\Tools\Publishing\sqlpubwiz.exe'
dbms_version = '2000'
connection_string = 'Server=' + source_server + ';Database=' + source_db + ';Trusted_Connection=True;'

sqlscript_filename = 'CreateSchema.sql'
sqlscript_filepath = os.path.join(sqlscript_dir, sqlscript_filename)
source_control_filepath = os.path.join(db_build_dir, sqlscript_filename)

replace_values_filepath = os.path.join(replace_values_dir, 'ReplaceList.csv')
error_found_message = 'Error found!'

def run_script():
    """Run all tasks"""

    tasks = [create_schema_script, convert_scriptfile_encoding, replace_script_values,
             checkout_file_from_source_control, copy_file_to_build_location,
             build_database_and_run_unit_tests]
    for task in tasks:
        print 'Executing \'' + task.func_name + '\'...'
        is_successful = task()
        if not is_successful:
            print 'Script Failure!'
            break

    if is_successful:
        print 'Script Success!'

def create_schema_script():
    """Generates the raw initial tsql schema script from target database"""

    if os.path.isfile(sqlscript_filepath):
        os.remove(sqlscript_filepath)

    args = ['sqlpubwiz', 'script', '-C ' + connection_string, '"' + sqlscript_filepath + '"',
            '-schemaonly', '-targetserver ' + dbms_version, '-f']
    os.spawnv(os.P_WAIT, sqlscripting_tool, args)

    if not os.path.isfile(sqlscript_filepath):
        print error_found_message
        print "File '" + sqlscript_filepath + "' was not created."
        return False

    return True

def convert_scriptfile_encoding():
    """Convert file from Unicode to ASCII"""

    cmd1 = 'type "' + sqlscript_filepath + '" > temp.txt'
    cmd2 = 'move temp.txt "' + sqlscript_filepath + '"'
    for cmd in [cmd1, cmd2]:
        dos = os.popen(cmd)
        dos.read()
        dos.close()

    return True

def replace_script_values():
    """Read from external csv file containing pairs of strings to replace values in sql script."""

    # if 'replace values' list not provided then assume not needed
    if replace_values_filepath == '':
        return False

    # check for 'replace values' file existence
    if not os.path.isfile(replace_values_filepath):
        print error_found_message
        print "Replace values list file '" + replace_values_filepath + "' does not exist."
        return False

    # modify file content with new values
    f = open(sqlscript_filepath, 'r')
    text = f.read()
    f.close()
    replace_values = csv.reader(open(replace_values_filepath, 'r'))
    for row in replace_values:
        find_text = row[0]
        replace_with_text = row[1]
        text = text.replace(find_text, replace_with_text)

    # write to script file with new values
    f = open(sqlscript_filepath, 'w')
    f.write(text)
    f.close()

    return True

def checkout_file_from_source_control():
    """Checkout from source control the schema file that will be replaced."""

    # look for source control file
    if not os.path.isfile(source_control_filepath):
        print error_found_message
        print "Source control file '" + source_control_filepath + "' does not exist."
        return False

    # checkout file (note: could use PyPerforce API framework instead)
    cmd = 'p4 edit -t text+k ' + source_control_filepath
    p4 = os.popen(cmd)
    p4.read()
    p4.close()

    return True

def copy_file_to_build_location():
    """Move sql script file to build directory"""

    shutil.copy(sqlscript_filename, source_control_filepath)

    return True

def build_database_and_run_unit_tests():
    """Build database and validate schema by running unit tests"""

    nant_tool = os.path.join(base_dir, 'Tools\\NAnt\\bin\\', 'NAnt.exe')
    build_script_filepath = os.path.join(base_dir,
                                         'Projects\\Libs\\Utils\\NAntScripts\\DatabaseBuilds\\',
                                         'GenericDatabase.build')
    # hack: need to remove 'Schema' folder; todo: remove this from generic db build script
    build_dir = os.path.split(os.path.normpath(db_build_dir))[0]

    # todo: replace NAnt script with Python script
    args = ['NAnt', '-buildfile:' + build_script_filepath, '-D:base.dir=' + build_dir,
            '-D:server=' + target_server, '-D:database=' + target_db,
            '-D:installUnitTesting=true']
    os.spawnv(os.P_WAIT, nant_tool, args)

    return True

run_script()
</pre><br /><b>Data Validation, Business Rules, and the Notification Pattern</b> (2008-07-21)<br /><br />On a previous project, I had encountered some unnecessarily long 'Save' methods in various ASP.NET web pages, each containing numerous validations of the input values from the UI page. 
Within the body of those methods, the code would run through all of those validations before finally reaching the decision of whether or not to commit the changes to the database (for example, checking that the length of the user's first name is less than 20 characters, etc.). In general, the methods ended up being hard to follow, especially if you needed to make a change to them.<br /><br /><a href="http://blog.sneal.net/Blog/default.aspx">Another developer I used to work with</a> had mentioned to me that data validation shouldn't even be in the Presenter class of a traditional MVP/MVC implementation. He also mentioned his own approach at the time (which, if I recall correctly, was something like exception guards?) as well as Jimmy Nilsson's approach to data validation as described in his book, <a href="http://www.amazon.com/exec/obidos/ASIN/0321268202">Applying Domain-Driven Design and Patterns: With Examples in C# and .NET</a>. In the meantime, I had recently been researching how to do MVP with the <a href="http://asp.net/" target="_blank">ASP.NET</a> custom validators, since we use them on our project at work, and was trying to find a "better" way to handle rudimentary validation.<br /><br />After some "blood, sweat, and tears" I think I was able to successfully apply <a href="http://www.martinfowler.com/eaaDev/Notification.html" target="_blank">Fowler's Notification Pattern</a> to solve this "issue". The Notification pattern manages the capturing of data validation error messages that are specific to domain objects and are generally shown to the end user. (An example is an email address that is required on a submit form. 
If the user skips it, then on 'submit' a message is displayed such as "An email address is required..." blah blah.) It all started while re-reading <a href="http://codebetter.com/blogs/jeremy.miller/archive/2007/06/13/build-your-own-cab-part-9-domain-centric-validation-with-the-notification-pattern.aspx" target="_blank">one of Jeremy Miller's posts on validation from his CAB series</a>. His post led me to both Fowler's Notification Pattern and two posts from Jean-Paul S. Boodhoo's blog (<a href="http://www.jpboodhoo.com/blog/ValidationInTheDomainLayerTakeOne.aspx" target="_blank">Part I</a> and <a href="http://www.jpboodhoo.com/blog/ValidationInTheDomainLayerTakeTwo.aspx" target="_blank">Part II</a>). Those served as the blueprints for my implementation.<br /><br />Essentially, I went through each of their slightly differing approaches to see what I could use. The core of what I ended up with borrows heavily from Fowler, with most of my changes just renaming things to suit my liking. Fowler always writes with such clarity, without the cruft, and his code examples are so easy to follow that his version was the main driver for what I wanted to do. Miller and JP took the pattern to another level, but it was too much for what I wanted. My goal, of course, was to keep it simple and let it evolve on its own (BDUF bad!). I initially developed it in a separate test project. Once that worked, I implemented it in our project at work fairly seamlessly.<br /><br />I first created the initial base classes that form the foundation of this pattern and can be re-used on any project. 
Below are their interfaces:<br /><pre style="overflow: auto; width: 400px; height: 150px;"><span style="color: rgb(0, 0, 224);font-size:85%;" >
/// Specific business rule error that provides a specific message about the broken business rule.
public interface IBusinessRuleError
{
    /// Gets or sets the name of the property that causes the error.
    string PropertyName { get; set; }

    /// Gets or sets the specific error message.
    string Message { get; set; }
}

/// Set of business rules used by domain objects that captures and stores errors.
public interface IBusinessRules
{
    /// Gets or sets the business rule errors.
    IList&lt;IBusinessRuleError&gt; Errors { get; set; }

    /// Gets a value indicating whether this instance has any business rule errors.
    bool HasErrors { get; }

    /// Determines whether the specified set of business rules contains the error.
    bool ContainsError(IBusinessRuleError ruleError);
}
</span></pre><br /><div id=":17" class="ArwC7c ckChnd">Basically, 'BusinessRules' manages a collection of individual 'BusinessRuleError' objects. Each business rule error contains the error message along with the name of the specific property in error, which is used later when mapping the error back to a specific UI control.<br /><br />Next, I added BusinessRules to the abstract DomainObject class and exposed it as a property. Initially it was hard-coded into my domain object, but at work I decided to pull it into its own class that could be instantiated internally and, if need be, injected as a dependency into the domain object base class (ideal, as you know, for mock testing!) 
Here is what I call the "validator":<br /><br /><pre style="overflow: auto; width: 400px; height: 150px;"><span style="color: rgb(0, 0, 224);font-size:85%;" >
public interface IDomainObjectValidator
{
    /// Runs the validation of each business rule.
    /// Each derived class can override this method to define its own
    /// set of validation rules.
    void RunValidation();

    /// Gets the business rules.
    IBusinessRules Rules { get; }

    /// Gets a value indicating whether this instance is valid based on whether any business rules failed.
    bool IsValid { get; }

    /// Determines whether the specified item to test is null or blank.
    bool IsNullOrBlank(string itemToTest);

    /// Fails if the condition to test is true.
    void FailIf(bool conditionToTest, IBusinessRuleError error);

    /// Fails if the item to test is null or blank.
    void FailIfNullOrBlank(string itemToTest, IBusinessRuleError error);
}
</span></pre><br /><br />This interface has the RunValidation method, whose purpose is to cycle through each business rule that the derived class is responsible for implementing. In addition, the interface has some basic, re-usable validation tests such as IsNullOrBlank, FailIf, etc. (courtesy of Fowler). (NOTE: What struck me very quickly was the similarity between these generic methods and the Asserts of NUnit. It dawned on me when I started to implement a new one that checked dates, such as IsBetween(string startDate, string endDate). Mmm...looks a lot like NUnit's Is constraint model. In fact, 'FailIf' looks like a special case of Assert.That. 
I'm wondering whether some framework already exists out there for me to use instead of trying to create and maintain my own.)<br /><br />In turn, the validator's members are delegated and exposed as members of the domain class itself:<br /><pre style="overflow: auto; width: 400px; height: 150px;"><span style="color: rgb(0, 0, 224);font-size:85%;" >
// domain object abstract class
private readonly IDomainObjectValidator _validator;

public DomainObject()
{
    _validator = new DomainObjectValidator();
}

public DomainObject(IDomainObjectValidator validator)
{
    _validator = validator;
}

public bool IsValid
{
    get { return _validator.IsValid; }
}

public IBusinessRules Rules
{
    get { return _validator.Rules; }
}

public virtual void RunValidation()
{
    _validator.RunValidation();
}

public bool IsNullOrBlank(string itemToTest)
{
    return _validator.IsNullOrBlank(itemToTest);
}

public void FailIf(bool conditionToTest, IBusinessRuleError error)
{
    _validator.FailIf(conditionToTest, error);
}

public void FailIfNullOrBlank(string itemToTest, IBusinessRuleError error)
{
    _validator.FailIfNullOrBlank(itemToTest, error);
}
</span></pre><br /><br />Once that was done, it was time to actually use it for a specific domain object. 
So I have a domain object named 'Question' that makes up a 'Quiz':<br /><br /><pre style="overflow: auto; width: 400px; height: 150px;"><span style="color: rgb(0, 0, 224);font-size:85%;" >
public interface IQuestion
{
    /// The text of the question itself.
    /// For example, "How old are you?"
    string Description { get; set; }

    /// Point value of the question if the quiz taker gets it correct.
    int MaxPointValue { get; set; }

    /// Sequence # of the question within a quiz.
    int SequenceNumber { get; set; }

    // Bunch of other members...
}
</span></pre><br /><span style="color: rgb(0, 0, 0);">So in the actual Question class, I override and implement the 'RunValidation' method with "rules/errors" specific to 'Question':</span><br /><pre style="overflow: auto; width: 400px; height: 150px;"><span style="color: rgb(0, 0, 224);font-size:85%;" >
// Question class
public override void RunValidation()
{
    // validation # 1
    FailIfNullOrBlank(_description, new BusinessRuleError("Description", "Question description must contain a value."));

    // validation # 2
    if (_description != null)
    {
        FailIf(_description.Length > 10,
            new BusinessRuleError("Description", "Question description can not be longer than 10 characters."));
    }

    // validation # 3
    FailIf(_maxPointValue > 100, new BusinessRuleError("MaxPointValue", "Maximum Point Value can not exceed 100."));

    // ....
    // validation # 100...
}
</span></pre><br /><br />So this is where it all happens. This is where all the business rules that require validation for 'Question' are kept and maintained. Not in the UI, not in the presenter, not in the database, nor anywhere else. Right where they should be. What's great is how nice it is to itemize and view all of your business rules in one place. 
The best part is unit testing this (which you really can't do well at all if it's in the presenter). Here is one of the tests:<br /><br /><pre style="overflow: auto; width: 400px; height: 150px;"><br /><span style="color: rgb(0, 0, 224);font-size:85%;" ><br />// Question test fixture<br /><br />[Test][Category("Data Validation")]<br /> public void DoesContainBrokenRuleWhenDescriptionIsNull()<br /> {<br /> Question question = new Question();<br /> question.Description = null;<br /> question.RunValidation();<br /><br /> IBusinessRuleError descriptionError = new BusinessRuleError("Description", "Question description must contain a value.");<br /> Assert.That(question.Rules.ContainsError(descriptionError), "Does not contain Description error.");<br /> Assert.That(question.IsValid, Is.False, "Question is valid.");<br /> }<br /><br /></span><br /></pre><br /><br />How cool is that? I especially like the clarity of this code line:<br /><br /><span style="color: rgb(0, 51, 0);"> question.Rules.ContainsError(descriptionError)</span><br /><br />Here are a few more tests:<br /><br /><pre style="overflow: auto; width: 400px; height: 150px;"><br /><span style="color: rgb(0, 0, 224);font-size:85%;" ><br /><br /> [Test][Category("Data Validation")]<br /> public void DoesContainBrokenRuleWhenDescriptionLengthGreaterThan10()<br /> {<br /> Question question = new Question();<br /><br /> question.Description = "1234567891011";<br /> question.RunValidation();<br /><br /> BusinessRuleError descriptionError = new BusinessRuleError("Description", "Question description can not be longer than 10 characters.");<br /> Assert.That(question.Rules.ContainsError(descriptionError), "Does not contain Description error.");<br /> Assert.That(question.IsValid, Is.False, "Question is valid.");<br /> }<br /><br /> [Test][Category("Data Validation")]<br /> public void 
DoesContainBrokenRuleWhenMaxPointValueExceeds100()<br /> {<br /> Question question = new Question();<br /><br /> question.MaxPointValue = 101;<br /> question.RunValidation();<br /><br /> BusinessRuleError maxPointValueError = new BusinessRuleError("MaxPointValue", "Maximum Point Value can not exceed 100.");<br /> Assert.That(question.Rules.ContainsError(maxPointValueError), "Does not contain MaxPointValue error.");<br /> Assert.That(question.IsValid, Is.False, "Question is valid.");<br /> }<br /></span><br /></pre><br /><br />By implementing this at work, the app's domain model is now slightly less anemic. However, the auto-gen of partial classes presented an issue that I was not too happy with. The MyGeneration template is currently set up to read the column constraints from the database and hard-code them directly into the property setters (which includes throwing exceptions). This forces the trapping of the validation error to occur OUTSIDE of the domain object, which goes against this implementation of the pattern. 
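To illustrate the conflict, a generated setter presumably looks something like the following. This is a guess based on the description above, not actual MyGeneration template output; the point is that throwing from the setter means the failure must be trapped outside the domain object:

```csharp
// Hypothetical example of what the code-generated setter might look like,
// based on the description above; not actual MyGeneration output.
public string Description
{
    get { return _description; }
    set
    {
        // Column constraint read from the database and hard-coded here
        // by the template. Throwing forces callers to catch the failure,
        // which is exactly what the Notification pattern tries to avoid.
        if (value != null && value.Length > 10)
        {
            throw new ArgumentException(
                "Description can not be longer than 10 characters.");
        }
        _description = value;
    }
}
```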
So unless I modify the template to remove this from the setters (or at least move it into some private method), I had to circumvent updating via the setters and use methods like so:<br /><br /><span style="color: rgb(0, 51, 0);">question.Description = "My Description";</span> <span style="color: rgb(0, 51, 0);">question.MaxPointValue = 101;</span><br /><br />becomes, using overloads:<br /><br /><span style="color: rgb(0, 51, 0);">question.UpdateDescriptionUsingValidation("1234567891011");</span> <span style="color: rgb(0, 51, 0);">question.UpdateMaxPointValueUsingValidation(101);</span><br /><br />and/or<br /><br /><span style="color: rgb(0, 51, 0);">question.UpdateUsingValidation("1234567891011", 101);</span><br /><br />Not really what I wanted, but it works for now until I can resolve that auto-gen issue (another reason why auto-gen can sometimes be an anti-pattern).<br /><br />So let's see the entity 'Question' actually used in a Controller/Presenter context:<br /><br /><pre style="overflow: auto; width: 400px; height: 150px;"><br /><span style="color: rgb(0, 0, 224);font-size:85%;" ><br /><br />// Presenter class<br />public void SaveChanges()<br /> {<br /> IQuestion question = new Question();<br /> question.Description = _view.Description;<br /> question.MaxPointValue = _view.MaxPointValue;<br /> question.SequenceNumber = _view.SequenceNumber;<br /><br /> question.RunValidation();<br /> if (question.IsValid)<br /> {<br /> _dao.SaveOrUpdate(question);<br /> _view.DisplaySuccess("The question has now been saved.");<br /> }<br /> else<br /> {<br /> _view.DisplayErrors(question.Rules.Errors);<br /> }<br /> }<br /></span><br /></pre><br /><br />Now how does that compare with one of the original LONG save methods? The intent, readability, and therefore maintainability are light years better. (As a side note, I had to use NHibernate's ISession.Evict() to prevent the invalid entity from being persisted to the db.)<br /><br />OK, finally, the UI/View/Code-Behind:<br /><br /><pre style="overflow: auto; width: 400px; height: 150px;"><br /><span style="color: rgb(0, 0, 224);font-size:85%;" ><br />// View class<br />public void DisplayErrors(IList&lt;IBusinessRuleError&gt; errors)<br />{<br /> foreach (IBusinessRuleError error in errors)<br /> {<br /> if (error.PropertyName.Equals("Description"))<br /> {<br /> _ctlDescriptionValidator.ErrorMessage = error.Message;<br /> _ctlDescriptionValidator.IsValid = false;<br /> }<br /><br /> if (error.PropertyName.Equals("MaxPointValue"))<br /> {<br /> _ctlMaxPointValidator.ErrorMessage = error.Message;<br /> _ctlMaxPointValidator.IsValid = false;<br /> }<br /> }<br />}<br /></span><br /></pre><br /><br />The controls '_ctlDescriptionValidator' and '_ctlMaxPointValidator' are <a href="http://asp.net/" target="_blank">ASP.NET</a> custom validators that are now really dumbed down. I also used the <a href="http://asp.net/" target="_blank">ASP.NET</a> 'ValidationSummary' control on the web page with hardly any wiring up. 
Here is some of the related HTML:<br /><br /><pre style="overflow: auto; width: 400px; height: 150px;"><br /><span style="color: rgb(0, 0, 224);font-size:85%;" ><br />&lt;form id="form1" runat="server"&gt;<br /> &lt;asp:ValidationSummary ID="_ctlValidationSummary" runat="server" /&gt;<br /><br /> &lt;asp:Label ID="_lblSuccessMessage" runat="server"&gt;&lt;/asp:Label&gt;<div><br />Description<br />&lt;asp:TextBox ID="_txtDescription" runat="server"&gt;<br />&lt;/asp:TextBox&gt;<br /> &lt;asp:CustomValidator ID="_ctlDescriptionValidator" runat="server" ControlToValidate="_txtDescription"<br /> ErrorMessage="" OnServerValidate="_ctlDescriptionValidator_ServerValidate"&gt;*&lt;/asp:CustomValidator&gt;<br /><br />Max Point Value<br />&lt;asp:TextBox ID="_txtMaxPointValue" runat="server"&gt;<br />&lt;/asp:TextBox&gt;<br />&lt;asp:CustomValidator ID="_ctlMaxPointValidator" runat="server" ControlToValidate="_txtMaxPointValue"<br /> ErrorMessage=""&gt;*&lt;/asp:CustomValidator&gt;<br /></span><br /></pre><br /><br />All in all, it does not matter whether I use the <a href="http://asp.net/" target="_blank">ASP.NET</a> validators, my own custom message controls, or something else entirely. By using the deadly combo of MVP and the Notification pattern, the data validation is not tightly coupled with the UI!<br /><br />I'm certain that aspects of my implementation can be improved and/or extended in some fashion. There were some things I debated as to the best approach, and I can go into more detail later. (For example, I mulled over a couple of other ways to pass the messages to the View but settled on the one above. Another was possibly using reflection to set the property names in the error messages...but like I said, I wanted to keep it simple for now. 
)<br /><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8416807644832025349.post-57119571280206425052008-07-21T22:58:00.000-07:002010-01-11T23:59:45.826-08:00Catching upNow that <a href="http://lexicalclosures.blogspot.com/2008/07/print-hello-world.html">I have a blog</a>, I thought I would "reprint" every once in a while some things I have written during my "pre-blog" years that might still be relevant or even mildly interesting to read again.Unknownnoreply@blogger.com0