Thursday, November 08, 2007

Problems with source-control and ItemGroup entries in .CSPROJ files

Here's an interesting one that just happened to me.

I work in a ClearCase source-control environment and recently had to have access to a VOB from another team so that I could do a bit of troubleshooting.

The problem arose as soon as I tried to compile their code; the source control complained that xxx.csproj couldn't be written because it was write-protected. This was to be expected, because I hadn't touched anything or made any checkouts.

The problem was that the system seemed to want to overwrite the files that were not checked out.

After much detective work I found that a block of XML was being inserted into the in-memory copies of the .csproj files.

After some digging in the registry I found that this was associated with something called STextTemplating, which has to do with the template system in Visual Studio and also seems to be associated with Domain Specific Languages (DSLs).

After more searching I found that it was necessary to remove the following keys:

{B4F97281-0DBD-4835-9ED8-7DFB966E87FF}
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\8.0\Packages\{a9696de6-e209-414d-bbec-a0506fb0e924}
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\8.0Exp\Packages\{a9696de6-e209-414d-bbec-a0506fb0e924}

After this, the problem went away.

Here is the link I found after about two hours of digging and pain...

http://www.devnewsgroups.net/group/microsoft.public.dotnet.framework/topic61709.aspx

Thursday, May 03, 2007

2.NET|!2.NET

Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic."

A few times recently, I've been in a situation where I just wanted to hold my head in my hands and groan "Noooo, not again! Make it stop!". Why? Because even after seven years of .NET, people still don't get what advantages there are in fully embracing the framework's philosophies to create really great applications or components.


Imagine dumping a medieval man into the middle of London. Aside from the obvious initial shock, he'd become used to the idea that cars travel about without horses attached to them; he could even learn to drive one. He would become used to the idea of lighting without flames, and he'd eventually become a consumer of fish and chips or McDonald's burgers without necessarily understanding how it all happened or what made it tick. After a short period of adjustment that began with screaming fits and soiling himself with fear, we would eventually have someone who could function just fine in the environment. Whether he could design a car or grasp the concepts of electricity would be, however, a different subject entirely.


This is the sort of thing that happens when someone who is "expert" in 1990s technology gets hold of the concepts of the third-millennium .NET Framework. They become a perfectly adequate programmer, able to apply their old concepts of programming to C# or VB.NET, and as long as their efforts are contained within a single sealed-up application all is well; you can't tell the difference from the outside.


What happens, however, when that person, however innately intelligent they may be, applies 1990s ideas to .NET architectures and has the responsibility for creating, say, a huge data-management framework of industrial proportions? You guessed it. A complete and total disaster that does nothing but make people groan with disbelief.


For me, the best aspects of .NET architecture are the ones that don't fall readily to mind, even if you're a world-renowned C++ guru with 20 years of experience in your field. For example, the idea that your objects may take part in a design-time environment. This was not even a possibility in the old world of C++, but now you should seriously consider whether your objects should at least carry attributes such as Browsable, Description and Category. Furthermore, you should ensure that your object has a type-conversion strategy, implements ToString correctly, provides a design-time editor, possibly a graphic editor, certainly a smart-tag, some designer verbs and so on.

When designing an architecture today, we also need to look at current trends in data binding. It used to be that tying an object to a GUI was a laborious process that required either brute force, the preferred C++ method, or the implementation of some pattern such as Model View Controller (MVC). Any remotely skilled Windows Forms engineer will immediately use data binding, which neatly sidesteps these issues. However, the objects in question need either to provide changed events for their properties or to implement INotifyPropertyChanged. So, I hear everyone reading this beginning to say, "Well, if the object is some deep part of a framework and not exposed to the GUI, why bother?" The answer, of course, is that data binding is no longer a GUI-only issue. .NET 3.0 and 3.5 already have data binding that can take place between any two properties, so that otherwise invisible objects can be bound. This is interesting not only for WPF maniacs but for anyone who has an object that receives input from another.
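As a sketch of the kind of object I'm talking about (the class, its property and the units are invented purely for illustration), here's a minimal bindable object that carries design-time metadata and raises change notifications:

```csharp
using System.ComponentModel;

// Hypothetical example: a bindable object that carries design-time
// attributes and implements INotifyPropertyChanged for data binding.
public class Temperature : INotifyPropertyChanged
{
    private double _celsius;

    public event PropertyChangedEventHandler PropertyChanged;

    [Browsable(true)]
    [Category("Data")]
    [Description("The temperature in degrees Celsius.")]
    public double Celsius
    {
        get { return _celsius; }
        set
        {
            if (_celsius == value) return;
            _celsius = value;
            OnPropertyChanged("Celsius");
        }
    }

    protected virtual void OnPropertyChanged(string name)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));
    }
}
```

Anything that listens for INotifyPropertyChanged, a Windows Forms BindingSource for instance, will now see changes to Celsius whether or not a GUI is involved.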

Finally, metadata and reflection, coupled with new type-description systems, are so powerful that they add vast new aspects to object orientation. These go so far outside the realms of the classic Encapsulation, Inheritance, Polymorphism triad that classes have ceased to be immutable definitions and have entered the realm of chimeric virtual objects that appear to be one thing when they are, in reality, something totally different.

If you're an architect and expecting to create a framework for your flagship product by all means pick .NET. Just don't go blundering about like a medieval peasant when you could be using magic instead.

Saturday, February 10, 2007

A strong business case for WPF?

Hot issues in the computing universe include the advent of Vista and the .NET Framework 3.0, including technologies such as the Windows Presentation Foundation (WPF). Even the most disconnected developer will be aware of the hyped images of videos playing on the faces of spinning cubes and photo albums that float in a plasma-filled void. All very pretty, and amazingly attractive to geeks who understand what it takes to do such a thing, but how do you explain the need for early adoption of such a technology to someone who doesn't see the need for these sorts of things in their application?


Recently I've been working in a company where I'm responsible for the GUI end of a complex data architecture, and I am faced with diverse problems on a daily basis. Firstly, the company is a large institution with a phenomenal IT overhead. Everything must be certified; everything must be accepted and verified. As a result, it's only recently that the company moved onto Windows XP, and even more recently that Service Pack 2 was accepted; even that is installed on very few machines. Imagine, then, how difficult it might be to persuade the people in the system to adopt a technology as new as WPF. When we see WPF we see video games, we see spinning cubes, we see applications that are so much fluff, with little or no use whatsoever for a company that just needs to display values from a database in a grid. A project manager who sees such demonstrations will dismiss them out of hand; indeed, they have.


However, the managers are very hot on the subject of performance. The data architecture I work with has the capacity to swallow and process large amounts of real-time data. This data comes from complex calculation systems that can change a couple of hundred lines in a data grid in a matter of moments. Using conventional systems, the best commercial grid software we can find and every optimisation technique available to us, we still see processor consumption climbing to the 100 percent mark on simple machines and the 50 percent mark (that is, one processor totally consumed) on multi-core systems. Why is this? Simply because instead of using the processor to do work, we're using it to paint the constantly changing cells in a grid. Our graphics and our UI are destroying our ability to do work.


Where does this leave us? In order to liberate the processor we need to stop it from drawing the graphics. We still need the graphics so this implies that they need to be managed somewhere else. By the graphics card itself possibly.


Strangely enough, WPF is here with a system that can make even the most bogged-down two-dimensional application fly. With the power of the graphics processors on even simple display cards today, the rendering of a grid can become a trivial matter, even when it's data-bound to a constantly changing stream of data.


Forget spinning cubes, forget cards that bounce and shatter in a waterfall of broken shards, forget plasma fields with smiling babies and dogs catching Frisbees. Show the managers in your company the benefits of freeing up those expensive processors for doing real work and leave the graphics where they belong, on the video card.


Sunday, January 28, 2007

What is the most important technology today?

With the recent release of .NET 3.0, Vista, WPF and the WCF systems, one might think the choice was large. However, from everything I've seen recently, the most impressive and important technology of today isn't what one might imagine.

I think there is only one truly outstanding thing in all of today's new technologies, and in fact it's probably one of the most fundamental concepts because, after all, what we do as programmers is normally to enable our users to visualise otherwise cryptic data.

The technology of which I speak is, of course, data-binding.

As an architect of systems that manipulate otherwise boring and complex data, I've found that the method of data binding used can make the difference between a mediocre application and a truly fantastic one.

Data-binding techniques are in the process of evolving. Look at the ideas in .NET 3.0 and you'll see that not only can the properties of data objects be bound to properties of user-interface objects, but just about any property may be bound to any other.

The binding mechanism is also becoming more of a presentation mechanism, with the ability to add inline bi-directional conversions so that raw data in a property may be treated in some way before it gets to the user interface.

These ideas may seem spurious at first glance. Who needs to bother with how data binding works at a deep level? The answer may surprise more than you imagined. Why? Consider this.

For many years, since the early days of MFC and the work of the Gang of Four, design patterns have been talked about in the industry and many attempts have been made to implement them correctly. The Document-View model in MFC, and the implementation of patterns such as MVC (Model, View, Controller) and more recently MVP (Model, View, Presenter), as used in the CAB (Composite Application Block), have all been the subject of implementation attempts that were more or less successful.

My own experience with these classic patterns is that the ideas are usually great: the pattern has a logic and a simplicity suggesting that the implementation should render good results. But the implementation, when left open to developers who use classic quick-and-dirty methods or just don't understand the architecture, usually falls short of expectations. One of the biggest problems I've seen in many applications is the blurring of the boundaries between the pattern's components. The way we build applications, whether MFC, Windows Forms or even WPF, leaves so much open to interpretation that developers will often write complex code into a form, dialog or user control, so that what should be only, say, the view becomes part view, part controller, part model.

How does data binding solve this problem of interpretation of a pattern and the consolidation of an architecture? The way we use data binding enables us to remove all aspects of business logic from the view and to move the intelligence of the application into the presentation portion, where it belongs.

I'm in the process of preparing a number of articles on this subject which I will be posting over the next few weeks. If you're interested in the follow up to these ideas, watch this space.

Monday, July 31, 2006

Sandcastle

I am a documentation freak. I do /// style docs by habit whenever I code, and until recently I've used NDoc to generate the final output.


Microsoft have been amazingly quiet about documentation. The old doc generator from Visual Studio sucked, and NDoc seemed to be its far superior successor, so I was waiting eagerly for a version of NDoc that would do generics and all that good stuff. Sadly, I recently read a post saying that the developer of NDoc was giving up because it was obvious that, although it was far superior, NDoc couldn't compete against Microsoft's new Sandcastle doc generator.


There are two things here that are a real shame. First, NDoc rocks, yet the guy never received anything like the amount of support he should have. I mean, a 5-buck donation on PayPal from everyone who used NDoc would have allowed him to work on the project full time and finish up NDoc 2.0. Did this happen? Not on your nellie.

Secondly, and not wishing to belittle the Sandcastle effort in any way, given that Microsoft has a huge fund of cash available, why didn't they just buy NDoc and integrate it into Visual Studio? I guess that would have been too simple.


Anyway, the Sandcastle CTP is out...

Monday, July 24, 2006

How much does a meeting cost?

I've been working for the last few months for a huge company who shall be nameless.



Managers in this company like to call team meetings so that the team can bring the manager up to speed on what's going on and how progress is. The typical team meeting will go on for an hour and a half, with between nine and twelve people sat around the conference table, basically reiterating stuff that could have been said in e-mail in under five minutes.



The main problem is that the managers themselves are non-technical and have more responsibility for administrative tasks than for getting the product out the door. This company prices all its work in man-days, and the manager who called today's meeting was recently heard to say that the team had spent 150 man-days this year on project X and "nothing had been done".



Well, apart from the fact that the whole team has moved from C++ to C#, had courses in Windows Forms, changed its development practices from "useless hierarchical" to XP/Scrum and defined a .NET application architecture, each person on the team spends more than four hours a week in meetings.



Half a day per person per week means 8 lost man-days per week. On a six month project, this means 200 man-days lost out of a budget of 2000. This also means paying a developer to sit and do nothing but scribble on a jotter for two-thirds of a year.

Friday, July 15, 2005

Speed freaks...

I've been working for a client who had a Win32-based application that did some graphics-creation tasks for a consumer product. They wanted to update the look and feel of the program and hired someone to create a new application for them. This engineer did a pretty good job but, through lack of personal funds and the availability of a freebie, wrote the application using the C# Express beta for C# 2.0.

The customer took delivery of the partial code and then discovered that to release it to their customers they'd have to back-port it to .NET 1.1 and finish up the program. Because of the highly graphical nature of the application, and because the original implementation was rather slow, they asked me to pep it up a bit.

After solving many of the drawing issues and getting the application up and running, we got to the stage where people were installing it to test, and everyone who saw it was horrified by the startup time. Some machines saw a 35-second wait between the first click and the appearance of the application window. They had never seen a fat .NET app before.

There were two problems with the whole scenario. First, the test cycle was to install a new copy of the app, click the application icon and sit there with a stopwatch waiting for something to appear. This meant that every time they timed it, the application was going through the process of creating the native-image copy for the application cache. Secondly, even though the application had a splash screen, this was part of the application that needed JIT compiling too, so it often didn't appear for some time. This encouraged the user to click a second time.

To solve these problems I tried two things. To make the splash screen appear instantly, I wrote a small program launcher that just displayed the desired image and then used Process.Start to launch the real program. This gives the user the feeling that something is happening and discourages them from clicking the icon again and again. Secondly, I wrote a custom installer class that used ngen, the .NET Framework native image generator, to create the native-code compiled version of the program at install time. This meant that the first run of the code already had the native image in the cache, reducing load times to the barest minimum. On a test machine I got the load time down from 30 seconds to 6 seconds total using this method.
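A sketch of the install-time half of that trick, from memory rather than the client's code; the class and method names here are invented, and the exact ngen command-line syntax varies between framework versions:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;

// Hypothetical installer helper: run ngen.exe over the freshly
// installed assembly so the first real launch finds a native image
// already sitting in the cache.
static class NativeImageHelper
{
    // ngen.exe ships in the framework directory alongside the runtime.
    public static string NgenPath()
    {
        return Path.Combine(RuntimeEnvironment.GetRuntimeDirectory(), "ngen.exe");
    }

    public static void PreCompile(string assemblyPath)
    {
        ProcessStartInfo psi =
            new ProcessStartInfo(NgenPath(), "\"" + assemblyPath + "\"");
        psi.UseShellExecute = false;
        psi.CreateNoWindow = true;
        using (Process p = Process.Start(psi))
        {
            p.WaitForExit(); // block until the native image is generated
        }
    }
}
```

The splash-screen half is just a tiny executable that shows a borderless form with the bitmap and then calls Process.Start on the main application before closing itself.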

Sunday, June 19, 2005

Duality of purpose.

I find myself embarrassed at times when answering questions in the various forums I hang out in. On occasion, I find myself with no other answer than "Buy the tool I created to do that".

This can seem mercenary and is not well received by the users of the forum. Last year I practically gave up offering advice in a VB forum because the people who frequented it thought the only reason I was there was to tout my wares.

To be honest, I always hesitate to answer "if you're interested in a commercial solution..." and think more than twice about posting a link to my business site, but recently I've begun to wonder why people are so set against a good solution.

Looking at the situation logically, the kinds of questions I answer with a commercial offering are the ones whose answers would start out with "Well, you need to build yourself a small nuclear power station. First mine some uranium and then..." followed by a couple of pages of complex overview. These are not trivial questions, then, and hence have no trivial answers.

What I really don't understand is when someone pops up online and demands a FREE-SOURCE-CODE-INCLUDED tool that does just exactly what they want, and moans like the clappers if someone suggests they should pay a few bucks' registration fee or even buy a developer license.

Work is money and money is work. If someone is faced with a week's work figuring out how to do a specific task and can get the answer for a thirty-buck registration fee, how can they be so indignant when someone says "OK, I'll save you a week's worth of finding out how to do this yourself in exchange for the cost of one hour of your precious time"?

If it were me, and it frequently has been, I'd jump at the chance and register there and then. Then again, like my dad says, "There's nought so strange as folk".

Wednesday, April 20, 2005

GPS Software

I recently purchased a GPS system for my car. The hardware is a Windows CE device running dedicated software from the German Navigon company. This little box of tricks will plan routes for you and speaks instructions such as "At the next junction, turn right" or "Leave the roundabout at the third exit".

In itself, the machine is an excellent companion to anyone who needs door-to-door driving instructions with intelligent route planning and re-planning if you, despite all the instructions, take a wrong turn or end up on a detour that closes the roads the machine thinks you should be taking.

What makes this, in my opinion, the greatest thing since sliced bread is Navigon's customer support service. I had a problem with my machine that was my own fault and contacted Navigon to see if I could fix the problem. I fully expected to have to pay for the fix but Navigon exchanged a few e-mails and have fixed me up like new.

So many companies these days make you jump through hoops and do as much as possible to place the blame for the fault on the purchaser to save money on support. Navigon's excellent customer support policies were a real breath of fresh-air.

Thursday, April 14, 2005

More printer ink thoughts


After printing a 1000-page PDF file I decided that the printer-ink situation should be resolved once and for all.



I wrote this article to show how to refill Epson TO452, TO453, TO454 and TO441 cartridges.


Friday, March 25, 2005

Serif fonts.

I have developed an aversion to serif fonts such as Times New Roman. For preference I use Verdana which has subtle differences from Arial or MS Sans-Serif and which I think is a far more readable typeface.
Arial is a slightly cramped typeface that has connotations of a serif font particularly because of the lower case "a" which I find to stand out almost as though it was purposefully created in a different style from the rest of the font.
Verdana has an open style which is far better proportioned and the "a" fits right in with the rest of the font as if it belongs.
Times is a very cramped typeface; this paragraph is the same point size as the Verdana above but, you must agree, is not nearly as easy on the eye. It reminds me of cheap paperback books and seems to lower the quality of the text. I know it's almost a 'traditional' font, but I do wish it wasn't the default for everything.
Georgia is Verdana's serif cousin and is more open and readable so as a compromise I tend to use it when serif fonts are required.
For me, Verdana is the queen of fonts.

Unsafe isn't

I've noticed that many programmers seem to have an irrational fear of the "unsafe" keyword in C#. It's almost as though they're afraid that if they use it in their code, the program will turn into Frankenstein's monster and leap upon them in their sleep.
"Unsafe" refers only to the fact that the code might reference unmanaged, and hence non-type-safe, code; it does not mean that one is taking a calculated risk by using it.
Personally, I only use unsafe where the highest performance is required and I don't want anything to intervene between my code and the bytes it's working on. In most of my publicly available examples these days I use the Marshal class, because it enables me to create code in C# that is readily translated to VB. This is of course just my lazy nature, not a fear of the evil that may befall me if I use that dreaded keyword.
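To illustrate the Marshal approach, here's a tiny self-contained round trip through an unmanaged buffer; it does byte-level work with no unsafe keyword in sight (the class and method names are mine, invented for this sketch):

```csharp
using System;
using System.Runtime.InteropServices;

static class MarshalDemo
{
    // Write a few bytes into an unmanaged buffer and read them back
    // using the Marshal class rather than unsafe pointer arithmetic.
    public static byte[] RoundTrip(byte[] data)
    {
        IntPtr buffer = Marshal.AllocHGlobal(data.Length);
        try
        {
            for (int i = 0; i < data.Length; i++)
                Marshal.WriteByte(buffer, i, data[i]);

            byte[] result = new byte[data.Length];
            for (int i = 0; i < data.Length; i++)
                result[i] = Marshal.ReadByte(buffer, i);
            return result;
        }
        finally
        {
            // Always free unmanaged memory; the GC won't do it for you.
            Marshal.FreeHGlobal(buffer);
        }
    }
}
```

The same Marshal.ReadByte and Marshal.WriteByte calls translate line for line into VB, which is exactly why I reach for them in published examples.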

Monday, March 14, 2005

Turn the world on its head!

Today I have had a bit of a shock. In response to a newsgroup posting I decided to update my article on generating 1 bit per pixel images from 24bpp colour. The original article used a mixture of Bitmap.GetPixel and LockBits to determine the pixel brightness and write the single bit pixel to the image array and I had decided to convert this to a LockBits solution on both sides of the equation for the sake of completeness.
I rewrote the C# code so that instead of using unsafe pointers it used the Marshal class to read and write the bytes. This made the code identical in function to the VB conversion. After testing the C# code I did a quick conversion to VB and ran the two applications on the same image.
I noticed that the VB application seemed to be faster, so I added a diagnostic routine to time how long the central part of the loop, the part that actually does the conversion, took. To make sure the code had a good chunk of data to work with, I used an image of 4000*3200 pixels.
To my great surprise, the VB code is consistently faster by over three seconds, with the C# doing the loop in around eight seconds and the VB running the functionally identical code in only five.
I have seen instances before where the VB compiler was demonstrably better at generating code than the C# one, but I have never seen it so clearly shown to be superior at simple tasks.
The timed code is shown in both C# and VB here:
C#
//for diagnostics
DateTime dt=DateTime.Now;

//scan through the pixels Y by X
int x,y;
for(y=0; y<img.Height; y++)
{
for(x=0; x<img.Width; x++)
{
//generate the address of the colour pixel
int index=y*bmdo.Stride+(x*4);
//check its brightness
if(Color.FromArgb(Marshal.ReadByte(bmdo.Scan0,index+2),
Marshal.ReadByte(bmdo.Scan0,index+1),
Marshal.ReadByte(bmdo.Scan0,index)).GetBrightness()>0.5f)
this.SetIndexedPixel(x,y,bmdn,true); //set it if it's bright
}
}

//tidy up
bm.UnlockBits(bmdn);
img.UnlockBits(bmdo);

//show the time taken to do the conversion
TimeSpan ts=DateTime.Now-dt;
VB
'for diagnostics
Dim dt As DateTime = DateTime.Now

'scan through the pixels Y by X
Dim y As Integer
For y = 0 To img.Height - 1
Dim x As Integer
For x = 0 To img.Width - 1
'generate the address of the colour pixel
Dim index As Integer = y * bmdo.Stride + x * 4
'check its brightness
If Color.FromArgb(Marshal.ReadByte(bmdo.Scan0, index + 2), Marshal.ReadByte(bmdo.Scan0, index + 1), Marshal.ReadByte(bmdo.Scan0, index)).GetBrightness() > 0.5F Then
Me.SetIndexedPixel(x, y, bmdn, True) 'set it if its bright.
End If
Next x
Next y
'tidy up
bm.UnlockBits(bmdn)
img.UnlockBits(bmdo)

'show the time taken to do the conversion
Dim ts As TimeSpan = DateTime.Now.Subtract(dt)
I'll have to compare the IL for the two compiled sections to see where the C# compiler fails to get that extra few ergs.

Friday, March 11, 2005

Does there have to be a reason?

I am a proponent of wordy explanations but there are some times when a simple and definitive answer should be taken at face value and not questioned.

Children often ask, "Daddy, what would happen if I were to poke my tongue in the electrical outlet?" whereupon daddy will reply, "You'll die a horrible and painful death, son". This should be enough of an answer and should be heeded by all small boys.

When a programmer asks "Can I store the Graphics object?" the answer is "No!". This is one of those definitive answers that is usually questioned but which always ends up being "No!" however many times or in however many different ways it's asked.

Just to recap....

Do not store the Graphics object. Do not store it in a local variable, a static variable, a shared variable, an object, a structure or a database. Do not put it in an envelope and post it to your aunt, do not draw a picture of it, do not photocopy it, do not have Monet make a painting of it.
Do not put it in your pocket for later, do not hash encode it and e-mail it to the CIA, do not put it on a Post-it note and stick it to your screen, do not grind it into powder and sniff it up your nose with the other stuff you just had. In short....

DO NOT STORE THE GRAPHICS OBJECT!
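For completeness, here is the positive version of the rule, sketched as a Windows Forms control (the control and its drawing are invented for illustration): take the Graphics you're given, use it, and let it go.

```csharp
using System.Drawing;
using System.Windows.Forms;

public class WellBehavedControl : Control
{
    // Correct: use the Graphics supplied for this paint cycle only.
    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        e.Graphics.DrawEllipse(Pens.Black, this.ClientRectangle);
        // e.Graphics goes out of scope here; it is never cached in a field.
    }

    // If you must draw outside a paint event, create a Graphics,
    // use it and dispose of it immediately.
    public void FlashBorder()
    {
        using (Graphics g = this.CreateGraphics())
        {
            g.DrawRectangle(Pens.Red, 0, 0, Width - 1, Height - 1);
        }
    }
}
```

The moment you stash a Graphics in a field "for later", it can be invalidated out from under you, which is the root of the rule above.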

Saturday, February 12, 2005

Falling in love again.

I have always been a "client side" kinda guy. Most of the work I do is fairly heavy graphics code and I'm reasonably good at problem solving when I can sit down in front of a good algorithm and bash out some code.
I am forced through circumstance to do work in HTML, and I generally use FrontPage for my web-site work, but I can honestly say that HTML and presentation stuff of that nature is BORING BORING BORING, so I tend not to put as much effort into it as I do my other work. This reluctance is compounded by the fact that it's so damned complicated to get a web site to do anything at all interesting; the mix of script, the hokey way it fits into HTML, and the truly abysmal debugging options one has for working with it mean that I do the minimum possible to make my sites work.
Up until recently, I had hosted my sites on cheap servers that didn't offer ASP.NET services, so I was stuck with whatever active stuff I could be bothered to learn in the JavaScript fashion. About a year ago, however, I moved BobPowell.NET to a server at Brinkster because they had much more bandwidth than the GoDaddy servers I had been on before.
Now, I'm a great believer in "If it ain't broke, don't fix it!", so I have used the Brinkster service as a high-bandwidth home for my old site and not made many changes to it. Within the last few weeks, however, I have got back into using ASP.NET because I'm writing a complete licensing, support and customer-management package for XRay tools. Of course, I knew how this all worked from a theoretical standpoint and had done lots of sample stuff in ASP.NET, but I had never sat down to write an end-to-end application in it.
Well folks, I'm in love. What an excellent way to create an application. It has all the visual advantages of HTML with all the algorithmic advantages of client-side work. I can write my web site in the same way that I'd write a Windows Forms application, and it works so well!
I'll definitely be putting LOTS more active stuff on my sites from now on. GO ASP+!!!

Friday, February 11, 2005

Bitmap manipulations.

I keep having online conversations with people who bemoan the fact that their computers have a spot of bother dealing with images of 10,000 * 8,000 or 13,000 * 18,000 pixels. They complain that the image scrolls slowly and that they can't drag it about, but they obviously haven't the faintest idea of the implications of such an image.

One bright spark complained of the poor performance of an image having 13000 by 18000 pixels so I sat and did a little calculation which, I think, brings home just how much information is stored in such an image.

13000 * 18000 is 234,000,000 pixels. Multiply this by the 4 bytes per pixel of images stored in memory on the computer and you get 936,000,000 bytes (936 million). OK, a page of type in a programmer's reference book runs out at about 88 characters by 36 lines. That's 3168 characters per side and 6336 for a single sheet of paper.

On my bookshelves I have several books over 1000 pages in length, and a thousand-page book works out to be somewhere in the region of two inches thick. Our 6336 characters go into 936,000,000 about 147,727 times. This means a book with 148 thousand pages (2 inches per thousand, remember) is 24.6 feet thick.
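The back-of-the-envelope sum above, spelled out in code for anyone who wants to check my arithmetic:

```csharp
using System;

static class BookArithmetic
{
    static void Main()
    {
        long pixels = 13000L * 18000L;            // 234,000,000 pixels
        long bytes = pixels * 4;                  // 4 bytes per 32bpp pixel
        long charsPerSheet = 88 * 36 * 2;         // 88 x 36 type, both sides
        long sheets = bytes / charsPerSheet;      // sheets needed to hold it
        double feetThick = sheets / 1000.0 * 2.0 / 12.0; // 2in per 1000 pages

        Console.WriteLine(bytes);                     // 936000000
        Console.WriteLine(sheets);                    // 147727
        Console.WriteLine(Math.Round(feetThick, 1));  // 24.6
    }
}
```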

Now I don't have many bookshelves in my house that are 25 feet long but if I had one I'd know that that was one fat book!

Even given the power of today's computers, that's one huge chunk of info to mess around with. Why don't people understand that before they start whining about the scrolling performance of their image-viewer application?

Friday, January 14, 2005

Still alive and kickin'

You may ask yourself, "What the heck is that Bob Powell up to these days?" It's true that I've not paid much attention to the blog, because my work has just gone crazy; I've had months of pretty intense stuff going on and no time to report on other things.

I'm writing a book. This is not too surprising I suppose but what will really blow your socks off is that it's a book for Visual Basic programmers.

As a self-promoting guru I get vast amounts of mail from people who ask for help on every subject, not just graphics, and I have seen a dire need for a book that explains how to create a correctly object-oriented architecture for VB users. So many people come from the VB6 world and discover that, although VB.NET is syntactically similar, the principles of software architecture in an object-oriented system such as .NET are a mystery. Consequently they make the most fundamental mistakes and create truly horrible applications that are beset with faults.

The book is entitled "Object Oriented Programming for .NET" and is presented in both VB and C# but with the emphasis on the VB angle.

The "day job" is keeping me well employed doing exciting things with public-address advertising systems. I must say that there is rarely a boring day.

Keeping an amount of churn in the GDI+ FAQ and Windows Forms Tips and Tricks also puts a load on.

Finally, WellFormed is on my list of important items and I'm working on a new system that will deliver content to subscribers via the web. I originally hosted the service with a provider who shall remain nameless, but which SUCKED and cost me too much money and time. Now WellFormed.NET is on Brinkster, but the application that serves the content is even more work for my thinly spread working day. I have a prelim version working, but it has bugs that I just don't have time to chase at this second.

I suppose being over-employed is better than being under-employed :-)

Saturday, December 18, 2004

I've been thinking of a plan

I recently started a beginners guide to GDI+ which seems to have put my site hits up by about 10%.

Are simple guides more popular?

Just thinking out loud here, which is what a blog is I suppose.

VAT collection by non EU sites

A question that has nagged at my brain in the last few months is "How do non-EU companies collect VAT?"
You see, VAT, or Value Added Tax, is Europe's form of sales tax. It is levied at a swingeing rate of up to 20% on some items, so it can be a significant portion of the cost.

European Union companies that collect VAT are obliged by law to put their VAT registration numbers on all invoices that charge VAT. Recently, however, I've noticed that many on-line shopping systems charge VAT to EU customers but don't put their VAT numbers on their sites or invoices, making me wonder if this is just a handy way for a US company to extort 20% of sales from Europeans.

I have been charged VAT by Symantec and I don't know their VAT number for my company accounts.

Moreover, the rate of VAT isn't in line with the VAT rates in the countries to which they sell. This leads me to believe it's bogus too, because if there's one thing a tax man is keen on, it's getting the percentages right. You see, the VAT rate for Symantec products is 20% no matter what EU country the purchase is made in. Does this collected VAT go into the general pool of VAT for Europe? I suspect not.

If I pay VAT I want to know it's going where it's supposed to go, or I want a refund if the application of sales tax is just there to enrich the company because they think no-one has noticed.

Tuesday, October 05, 2004

Programming styles I use and why....

When the code you write is for the benefit of others, a clear and readable coding style is as important as a good prose style. This is my personal style guide which I use in all my programming.

#1 Naming conventions.

Clear, concise naming is important if code is consumed by someone else. Cryptic names for variables, methods and properties are for obfuscators, not programmers.

Private fields begin with an underbar and a lower-case letter, such as _data. Longer names are camel-cased, such as _myLongData.

Accessor properties remove the underbar and capitalize the first letter, such as Data or MyLongData.

Variables in a method begin with a lower-case letter and are camel-cased if long such as "variable" or "myLongVariable". Parameters passed to a method follow similar convention.

Class names begin with capital letters and are camel-cased where necessary.

Class, method, property and field names are indicative of function wherever possible.

#2 Code layout

Code should be well laid out and as concise as possible.

Whitespace doesn't impact code size but can make a lot of difference to readability. Blank lines can be as informative as lines full of code.

Namespace items, classes, methods, properties and field groups are separated by white space so that the eye automatically distinguishes between one section and another.

Braces are on their own lines and indentation is used to indicate nesting or scope. The exception to this rule is the case of simple accessor properties, where the get and set accessors may be defined on a single line. Accessors that are more complex than assigning the value to the associated private field follow the usual method brace and indentation rules.

#3 Comments

Be liberal and verbose with comments and documentation. Basic inline XML documentation is a great help when studying a class library but of little or no use when trying to get into the head of the programmer, even if that programmer may have been you two years ago. Remember the remarks and examples as well as internal comments.

See the code example here.
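As a quick illustration of rules #1 to #3 together, a minimal class written this way might look like the following (the Measurement class and its members are invented purely as an example):

```csharp
using System;

/// <summary>
/// Holds a named measurement value.
/// </summary>
/// <remarks>
/// Private fields use an underbar prefix (_data); accessor properties
/// drop it and capitalize the first letter (Data); simple accessors
/// sit on a single line, complex ones use full brace layout.
/// </remarks>
public class Measurement
{
    // Private fields: underbar plus a lower-case first letter.
    private string _name;
    private double _data;

    public Measurement(string name, double data)
    {
        _name = name;
        _data = data;
    }

    // A simple accessor may be defined on a single line.
    public string Name { get { return _name; } set { _name = value; } }

    // An accessor that does more than assign the value follows the
    // normal method brace and indentation rules.
    public double Data
    {
        get { return _data; }
        set
        {
            if (double.IsNaN(value))
                throw new ArgumentException("Data must be a number.");
            _data = value;
        }
    }
}
```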