Posts

Do Code Reviews Work?

I've noticed a very disturbing trend that repeats whenever the topic of code reviews comes up. So much (too much?) attention is paid to the "rules" of combat, everything from the proper etiquette of code reviews to the agreed-upon coding standards du jour. Too little attention, to my mind, is paid to the skill set required to actually perform a proper code review. Let me illustrate with an example. Client 'X' goes to great pains and kerfuffle to define a coding standard. Curly braces go here (hence the name of this blog). Variables are named this way. To comment or not to comment, that is the question, and so on, and so forth. I submit some code for a pull request. I get back... line noise. Kind of like the automated resume readers that scan for keywords. Rule matching, except done poorly by humans. Did you actually look at the code, or at the Git diff tool? Look at the code? Are you out of your mind? I'm busy, I have real work to do…

Debugging isn't magic! Patience and common sense go a long way

When you have worked as long as I have, and on as many projects and software problems as I have encountered, it stands to reason that you will tend to end up working with other bright engineers. I have had the pleasure of working with developers on a great many applications that I have not only been proud to attach my name to, but that I am pleased to say are still out there in the wild, working as expected, performing as designed, and making clients quite happy in the process. When it comes to troubleshooting systems... I am sometimes really surprised where things can take a turn. I have often heard this described as a special skill set, something that some are more adept at than others. The more cynical folks I have met will ascribe it to a darker motive, one where developers are either lazy, incompetent, or simply blame the first thing they don't understand. I don't believe it's some wizard's skill attained by climbing the magic mountain, nor will I trash those…

Representing C/C++ unions and bitfields in C#

You are a seasoned C++ applications or embedded programmer, and you need to access an integer bitfield as a set of specific bits. You know how to do this:

    union myUnion
    {
        unsigned int info;
        struct
        {
            unsigned int flag1  : 1;   // bit 0
            unsigned int flag2  : 1;   // bit 1
            unsigned int flag3  : 1;   // bit 2
            unsigned int flag4  : 1;   // bit 3
            unsigned int flag5  : 1;   // bit 4
            unsigned int flag6  : 1;   // bit 5
            // .
            // .
            // .
            unsigned int flag31 : 1;   // bit 31
        };
    };

Suppose, however, that you are a C#/.NET programmer. What do you do then? There is no provision or direct support in the language to do this. What tools do you have at your disposal? Well, you do have:

    [StructLayout(LayoutKind.Explicit, Size = 1, CharSet = CharSet.Ansi)]

That will give you the ability to control the byte packing. You also have:

    [FieldOffset(x)]

Where: x…
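For comparison, here is a minimal C# sketch (not taken from the post) of how LayoutKind.Explicit and FieldOffset can overlay a raw 32-bit value with a flags view; the type and member names (MyUnion, StatusFlags, Info) are illustrative assumptions rather than the post's actual code.

    using System;
    using System.Runtime.InteropServices;

    // Illustrative sketch: the whole word and the flags view share offset 0,
    // roughly mirroring the C/C++ union above.
    [Flags]
    enum StatusFlags : uint
    {
        Flag1 = 1u << 0,   // bit 0
        Flag2 = 1u << 1,   // bit 1
        Flag3 = 1u << 2,   // bit 2
        // ...
    }

    [StructLayout(LayoutKind.Explicit)]
    struct MyUnion
    {
        [FieldOffset(0)] public uint Info;           // the full 32-bit word
        [FieldOffset(0)] public StatusFlags Flags;   // the same four bytes, viewed as flags
    }

    static class Demo
    {
        static void Main()
        {
            var u = new MyUnion { Info = 0x5 };                       // bits 0 and 2 set
            Console.WriteLine(u.Flags.HasFlag(StatusFlags.Flag1));    // True
            Console.WriteLine(u.Flags.HasFlag(StatusFlags.Flag2));    // False
        }
    }

Because both fields are pinned to offset 0, writing Info and reading Flags gives the same "one storage location, two views" effect the C++ union provides.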

Where it all began... or, "in my day, we didn't have '1's and '0's... we only had '1's"

Although my career spans over 35 years, I've been programming and writing code for close to 40 years now. That notion wasn't lost on me when, the other day, I was talking to other "grizzled" engineers about how the more things change, the more they stay the same. In truth, though, software development and engineering is just one side of a craft I have been in love with for as long as I can remember. Hardware and electronics is where I got started. I took apart everything I could find just to see how it worked. I used to spend hours in my parents' basement reading all of my dad's science books. He used to bring home old, worn-out copies of Popular Mechanics, Popular Electronics, and the like. If it had wires or gears or required some modicum of technical knowledge, I was all over it. The first computer I ever wrote software for was an Apple ][. To a 14-year-old, entering commands in Apple Integer BASIC and displaying color sprites was nothing short of miraculous…

Dispatch? I'd like to make a call. The difference between Synchronization Context, Task Scheduler and Dispatcher

I recently had to deal with cleaning up Dispatcher-based UI update code in a WPF application. I am struck that, after so many years, and much like garbage collection (think IDisposable) and threading, there still does not exist a clear understanding or explanation of how to marshal across threads, the benefits of using constructs like the SynchronizationContext to do so, and what problem(s) it solves. I thought it might be instructive to demonstrate the various techniques available to developers, using the WPF framework for illustrative purposes. Keep in mind that the concepts outlined here apply not just in WPF, but anywhere business logic code meets UI presentation. Given a Dispatcher object and the SynchronizationContext object, which one should you choose, and what are the compelling reasons for doing so? Note: this post was inspired by an answer I posted on StackOverflow to this very question. Perhaps it would help to explain what problem the SynchronizationContext…
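To make the distinction concrete, here is a minimal sketch (my own assumptions, not the post's code) showing both ways to marshal a result from a worker thread back onto the WPF UI thread; the updateStatus callback and the captured fields are hypothetical names.

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Windows.Threading;

    public static class UiMarshalingSketch
    {
        // Capture both of these on the UI thread, e.g. in a window's constructor.
        public static Dispatcher UiDispatcher;
        public static SynchronizationContext UiContext;

        public static void CaptureUiThread()
        {
            UiDispatcher = Dispatcher.CurrentDispatcher;
            UiContext = SynchronizationContext.Current;
        }

        public static void RunWork(Action<string> updateStatus)
        {
            Task.Run(() =>
            {
                string result = "done";  // pretend this took a while on a worker thread

                // Option 1: WPF-specific -- queue the callback on the Dispatcher.
                UiDispatcher.BeginInvoke(new Action(() => updateStatus(result)));

                // Option 2: framework-agnostic -- post through the captured
                // SynchronizationContext (WPF installs a DispatcherSynchronizationContext).
                UiContext.Post(_ => updateStatus(result), null);
            });
        }
    }

The Dispatcher route ties the code to WPF; the SynchronizationContext route works anywhere a context has been installed (WPF, WinForms, etc.), which is usually the deciding factor between the two.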

Implementing the try/catch/finally pattern in C++ using lambdas, or why can't this be as easy as it is in C#?

My very first programming 'language' was Z-80 assembler, even before I had learned to program in interpretive BASIC. So you would think I would be used to doing things the hard way, used to pain... =P When I first saw the C language, I fell in love right away. I already had several CPU dialects under my belt, and the idea of pointers was an easy notion to grasp -- I had already been using VARPTR in BASIC. A language that was graceful, provided native pointers and language constructs, and had NO LINE NUMBERS; who could ask for more? Then came C++. Well, not exactly. My first exposure to C++ was via the Glockenspiel "compiler", which was really a glorified C preprocessor. No matter; eventually Borland and Microsoft released C++ compilers, and I never looked back. Until around 2000, when Microsoft released .NET and C#. By this time, we had COM/DCOM, the STL, and Boost. Writing custom allocators, IUnknown for custom COM objects, RAII patterns…

Mutexes, events and threads, oh my! Synchronization using a task-based asynchronous approach

One of the more difficult problems when trying to synchronize across threads and processes on Windows is not only choosing the right kernel object, but also constraining the lifetime and signalling context of said object(s). Let's suppose we have a GUI application (which provides us with your typical potpourri of threading and synchronization context issues) and a console monitoring application with which we want to communicate some event. Let's suppose further that our GUI application runs many background threads, any one of which may turn out to be the thread that needs to signal the monitoring console process. It would look something like this. Immediately, several issues arise:

1. The mutex object would have to be created on a separate thread in the process so as not to block the main foreground thread (more on this later).
2. The mutex object must be signaled on the same thread that was used to create it!

There are two approaches that can be taken to…
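As a point of reference, here is a minimal sketch of one way to sidestep the mutex's thread-affinity problem: a named EventWaitHandle, which can be set from any thread, signaled from a Task-based background worker. The event name "MyAppMonitorEvent" and the class names are assumptions of mine; the post itself may take a different route.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // In the GUI process: signal the monitor from whichever worker thread finishes.
    static class GuiSide
    {
        static readonly EventWaitHandle Signal =
            new EventWaitHandle(false, EventResetMode.AutoReset, "MyAppMonitorEvent");

        public static Task DoWorkAsync() => Task.Run(() =>
        {
            // ... background work on a thread-pool thread ...
            Signal.Set();   // events have no owning thread, so this is safe from anywhere
        });
    }

    // In the console monitoring process: block until the GUI application signals.
    static class MonitorSide
    {
        static void Main()
        {
            using (var signal =
                new EventWaitHandle(false, EventResetMode.AutoReset, "MyAppMonitorEvent"))
            {
                Console.WriteLine("Waiting for the GUI application...");
                signal.WaitOne();
                Console.WriteLine("Event received.");
            }
        }
    }

Unlike a mutex, an auto-reset event carries no ownership, so neither the creating thread nor any particular worker has to be kept alive just to signal it.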