Some of the practices that we follow are listed below. This is not a complete list, but a selection of the more important ones. Some will be followed by most 'good' programmers, while others are practices that only the very best developers employ.
The process of normalising a database is essentially the method by which a collection of independent data tables is turned into a properly constituted relational database. This eliminates certain common problems and errors from the database and enforces low-level rules to enhance and protect the integrity of the data. For example, a proper relational database should not contain duplicate information, thus eliminating the re-keying of the same address multiple times with the possibility (or is that the probability!) that the address may vary in different places. All our databases are normalised to Third Normal Form (3NF), which is the usual level to which a relational database should be normalised (some may then be partly de-normalised for performance reasons).
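As an illustration of the pay-off, here is a minimal sketch of a schema normalised in this way. It uses Python's built-in sqlite3 module purely so the example can run anywhere (our production work is in Access and SQL Server), and the table and field names are invented for the purpose:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Normalised to Third Normal Form: the address is held once, in
# tblCustomers, and every order simply refers to it by CustomerID.
conn.executescript("""
CREATE TABLE tblCustomers (
    CustomerID   INTEGER PRIMARY KEY,
    CustomerName TEXT NOT NULL,
    Address      TEXT NOT NULL
);
CREATE TABLE tblOrders (
    OrderID    INTEGER PRIMARY KEY,
    CustomerID INTEGER NOT NULL REFERENCES tblCustomers (CustomerID),
    OrderDate  TEXT NOT NULL
);
""")

conn.execute("INSERT INTO tblCustomers VALUES (1, 'Smith Ltd', '1 High St')")
conn.execute("INSERT INTO tblOrders VALUES (NULL, 1, '2024-01-05')")
conn.execute("INSERT INTO tblOrders VALUES (NULL, 1, '2024-02-17')")

# The address was mis-keyed: one update corrects it for every order.
conn.execute("UPDATE tblCustomers SET Address = '2 High St' WHERE CustomerID = 1")

rows = conn.execute("""
    SELECT o.OrderID, c.Address
    FROM tblOrders o
    JOIN tblCustomers c ON c.CustomerID = o.CustomerID
    ORDER BY o.OrderID
""").fetchall()
print(rows)
```

Because the address is held only once, correcting it is a single update that every linked order immediately reflects; in the un-normalised equivalent the same correction would have to be re-keyed against every order.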
This is related to the previous topic. Referential integrity is a means of protecting the integrity of the data in a database. Essentially it means that a record cannot be created unless the records it depends on already exist, and cannot be deleted while other records still refer to it. For example, an order header record could not be deleted if there were still order detail records linked to it.
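The same rule can be sketched with Python's sqlite3 module (the table names are hypothetical, and note that SQLite, unlike SQL Server, needs foreign key enforcement switched on per connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

conn.executescript("""
CREATE TABLE tblOrderHeader (
    OrderID      INTEGER PRIMARY KEY,
    CustomerName TEXT NOT NULL
);
CREATE TABLE tblOrderDetail (
    DetailID INTEGER PRIMARY KEY,
    OrderID  INTEGER NOT NULL REFERENCES tblOrderHeader (OrderID),
    Product  TEXT NOT NULL
);
""")
conn.execute("INSERT INTO tblOrderHeader VALUES (1, 'Smith Ltd')")
conn.execute("INSERT INTO tblOrderDetail VALUES (NULL, 1, 'Widget')")

# The engine itself now refuses the dangerous delete: the header cannot
# go while detail records still point at it.
try:
    conn.execute("DELETE FROM tblOrderHeader WHERE OrderID = 1")
    deleted = True
except sqlite3.IntegrityError:
    deleted = False
print("Header deleted:", deleted)
```

The point is that the rule lives in the database engine, not in application code, so it cannot be bypassed by a careless form or an ad hoc query.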
Although not required for referential integrity, it is also usually regarded as Best Practice not to physically delete records but instead to flag them as inactive. This has the added advantage of preserving the historical data for future analysis.
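A minimal sketch of the 'flag as inactive' approach, again using sqlite3 with invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE tblCustomers (
    CustomerID   INTEGER PRIMARY KEY,
    CustomerName TEXT NOT NULL,
    Inactive     INTEGER NOT NULL DEFAULT 0   -- the 'soft delete' flag
)""")
conn.executemany("INSERT INTO tblCustomers VALUES (?, ?, 0)",
                 [(1, "Smith Ltd"), (2, "Jones plc")])

# 'Delete' a customer by flagging it rather than physically removing it.
conn.execute("UPDATE tblCustomers SET Inactive = 1 WHERE CustomerID = 2")

# Day-to-day queries filter on the flag...
active = conn.execute(
    "SELECT CustomerName FROM tblCustomers WHERE Inactive = 0").fetchall()
# ...but the full history remains available for analysis.
total = conn.execute("SELECT COUNT(*) FROM tblCustomers").fetchone()[0]
print(active, total)
```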
You want to allow easy access to your data for authorised employees but prevent others from viewing, or even worse, altering or deleting, your data. Additionally, you want to reduce the possibility of authorised users making a mistake and accidentally changing data that they did not mean to. There are various ways in which security can be enhanced to achieve this.
It is quicker and cheaper to write an application with low security than one with high security. It all comes down to how much you value your data. What would be the implications for your organisation if your data was stolen, deleted or, even worse, altered, either accidentally or maliciously? For many the answer will be that the very existence of the organisation might be threatened. For this reason security is not optional for most organisations.
Error Trapping & Logging
All VB or VBA code routines should include error trapping as standard. If an error occurs in a routine without error trapping then either the system will crash or, often worse, a misleading error message might be displayed.
Error trapping is all well and good but, to follow Best Practice, any errors that do occur should also be 'logged' so that they can be examined after the event. This involves writing the details of the error to either a table in the database or to an external file, together with a range of other information beyond the basic error details. Depending upon the situation we log the error number, error description, the module and routine where the error occurred, date and time, user name, application version number, computer name, client operating system, service pack release, memory, disk space, screen resolution, etc. All of this information can be a huge help in diagnosing problems more quickly (most developers will have had users phoning up to say that the system gave an error message... but they couldn't remember what the message said or what they were doing at the time!). In addition it allows us to be more proactive, examining the error log for any errors that might have occurred but have not been reported.
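The shape of such a logging routine can be sketched as follows. This is an illustrative Python version rather than our production VBA; the module and routine names are invented, and a real implementation would capture the fuller list of details above:

```python
import datetime
import getpass
import platform
import sqlite3

APP_VERSION = "2.3.5"   # hypothetical version number, for illustration only

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE tblErrorLog (
    LogID            INTEGER PRIMARY KEY,
    LoggedAt         TEXT,
    UserName         TEXT,
    AppVersion       TEXT,
    ComputerName     TEXT,
    Module           TEXT,
    Routine          TEXT,
    ErrorDescription TEXT
)""")

def log_error(module: str, routine: str, err: Exception) -> None:
    """Record the error together with its context, not just the message."""
    conn.execute(
        "INSERT INTO tblErrorLog VALUES (NULL, ?, ?, ?, ?, ?, ?, ?)",
        (datetime.datetime.now().isoformat(), getpass.getuser(), APP_VERSION,
         platform.node(), module, routine, f"{type(err).__name__}: {err}"))

def risky_routine():
    try:
        return 1 / 0                                   # something goes wrong...
    except Exception as err:
        log_error("modOrders", "risky_routine", err)   # ...trapped and logged
        return None

risky_routine()
logged = conn.execute(
    "SELECT Module, Routine, ErrorDescription FROM tblErrorLog").fetchall()
print(logged)
```

With the context written away automatically, nobody has to rely on the user remembering what the message said.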
With some of our more critical systems we also install automated processes that e-mail the error log to us, so that we can continue to check it regularly.
Version control is all about checking that the application you are using is the current one. If, for example, the database itself was on version 2.3.1 but one of the clients was using a 2.2.0 front end, then their system may crash or introduce errors into the data. This can be a major headache for many systems, especially ones with physically dispersed locations. Where appropriate our systems include version control mechanisms to check that the front and back end are on the same version. Many then go on to automatically transfer the latest version from a master copy on the server, often without any user intervention being required. For one of our customers, with 100 remote users working away from the office, this turned an update task from a labour-intensive, month-long process into one that happened automatically the next time each user logged in.
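A simple version check can be sketched like this (illustrative Python/sqlite3; the version numbers and table name are hypothetical, and a real system would go on to fetch the newer front end rather than just report the mismatch):

```python
import sqlite3

EXPECTED_VERSION = "2.3.1"   # compiled into this (hypothetical) front-end build

backend = sqlite3.connect(":memory:")   # stands in for the shared back end
backend.execute("CREATE TABLE tblVersion (SchemaVersion TEXT NOT NULL)")
backend.execute("INSERT INTO tblVersion VALUES ('2.3.1')")

def versions_match(conn: sqlite3.Connection, expected: str) -> bool:
    """Refuse to run against a mismatched back end rather than risk the data.
    A fuller implementation would fetch and install the newer front end here."""
    (actual,) = conn.execute("SELECT SchemaVersion FROM tblVersion").fetchone()
    return actual == expected

ok = versions_match(backend, EXPECTED_VERSION)
print("Safe to start:", ok)
```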
Change Control is all about logging changes made to a running application; Why the change was made, When it was made, What the new version number is, Who made it, Which parts of the system were affected, etc.
If you later detect new errors or inconsistencies it is very helpful to be able to review any changes that may have affected that part of the system. On systems with lots of users it is also of benefit to the helpdesk or IT support.... "ahhh yes..... I can see that this problem was fixed 1 month ago in version 2.3.5."
Table Driven Design
A programmer developing something quickly will tend to 'hardcode' data into the application. Take the example of a salutation (e.g. Mr, Mrs, Miss, Ms). Someone who does not follow Best Practice would tend to just type these options directly into a drop-down list control on a form. After all, it's very quick and easy to do! However, what happens if you want to add a new option, say 'Dr' or 'Professor'? The user would have to contact the programmer, wait for them to become available, the programmer would then have to modify the program, test it (or maybe not!) and finally issue the new version to all users. The next week 'Reverend' may need to be added, so the whole cycle would have to be gone through again! And what happens if the same data is needed on several different forms? In this scenario it is all too easy to enter a value into one form that is not acceptable elsewhere in the application because someone forgot to update the list on a second form.
Best Practice says that the data should NOT be coded into the application but should instead be held in a table in the database. When the form is opened, the table of salutations is dynamically read into the drop-down list so it is always up to date. If a new salutation is required then the user just has to add a new entry to the salutation table. This does mean that an additional data maintenance form is required to allow the user to do so easily. Perhaps now you can start to see why Best Practice is better and cheaper in the long run but can take longer and cost more to develop initially. If one was being cynical, you could also say that by not following Best Practice a developer can create an ongoing revenue stream from all of the minor amendments and enhancements that will subsequently be required!
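The table-driven approach can be sketched as follows (illustrative Python/sqlite3 rather than the Access form itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblSalutations (Salutation TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO tblSalutations VALUES (?)",
                 [("Mr",), ("Mrs",), ("Miss",), ("Ms",)])

def load_salutations(conn: sqlite3.Connection) -> list:
    """What the form does on opening: read the live list, never hardcode it."""
    return [row[0] for row in
            conn.execute("SELECT Salutation FROM tblSalutations ORDER BY Salutation")]

before = load_salutations(conn)
# A user adds 'Dr' via a maintenance form: no code change, no new release.
conn.execute("INSERT INTO tblSalutations VALUES ('Dr')")
after = load_salutations(conn)
print(before, after)
```

Every form that loads the list this way picks up the new entry automatically, so the lists can never drift out of step with one another.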
A naming convention establishes a standard for the names given to forms, tables, queries, variables, etc. Its use helps any developer looking at a database to understand its internal details much more quickly than would be the case if a naming convention had not been applied. For example, if a programmer comes across a variable called mlngOrderNumber they can immediately tell a number of things from the name. The 'm' prefix tells you that it is a module-level variable, visible across all routines within the relevant module. The 'lng' part tells you that it is a 'long integer', i.e. that it holds numerical, integer values within a defined range. The remainder is a meaningful name that describes the contents of the variable, 'OrderNumber' in this case. Note also the use of upper and lower case to highlight the different parts of the name, which makes it easier to pick out when scanning down a page of code. The same applies to objects within the database: tables should start with tbl (e.g. tblOrders), stored procedures with sproc (not sp_, which has a special meaning to SQL Server), forms with frm, queries with qry, etc. (the exact prefixes used are not too important provided they are easily understood and are applied consistently).
All of this makes an application with a naming convention much easier for another programmer to understand. In practice a system with a good naming convention can be up to twice as fast to modify as one without. Additionally, it makes migration of the data to another system much easier. For example, we often see systems where a field in a table has been called, say, "Date" or "Order No;". Best Practice says that they should be called "OrderDate" and "OrderNo" respectively. In the first case the name used is a 'reserved' word that has a special meaning; in the second, the space and the punctuation characters can cause complications. Whilst the first set of names may work in the existing application, if you were to export or upsize the data to another database system there is a high likelihood that the names would be declared illegal and hence the export or upsize would fail. With Best Practice we avoid names like this and hence also avoid problems that may not become apparent for several years.
Code should be laid out in a consistent, logical and readable way. This means using indentation and spacing to highlight the underlying programming logic (IFs should be aligned with their corresponding ELSE and END IF, code inside the IF should be indented so that it can be clearly seen which code is executed if the IF condition is true, and so on).
In addition to the layout of the code, its content should also follow Best Practice. Typical examples of this include:
All variables should be dimensioned (declared) in a single block near the start of any routine that uses them and each should be dimensioned on a separate line.
Where a CASE statement is used then a CASE ELSE should also be used to catch unexpected conditions.
Data types should always be explicitly declared, for example in function arguments and the function return value.
All objects, such as record sets and automation objects, should be closed after use (and we generally also 'set' them to 'Nothing' as well).
Comments should be used where they would enhance the understanding of a section of code.
A good example of Best Practice can be seen when a new record is inserted by a SQL Server stored procedure. The ID of the newly inserted record is frequently retrieved for use in further processing. The common way of doing this is to use the built-in @@IDENTITY function (which was the only way of doing this in previous versions of SQL Server). Your system might then function happily for the next year or so until someone decides to audit the table that your stored procedure is adding to and adds a trigger to it. Peculiar things then start to happen, with data being linked to the wrong record. The reason is that the audit trigger's own insert changes @@IDENTITY so that it points to the 'wrong' record. We therefore use SCOPE_IDENTITY(), which is unaffected by triggers, to forestall this possibility.
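The scenario itself can be sketched outside SQL Server. In this illustrative Python/sqlite3 version an audit trigger performs its own insert into a second table, yet the ID reported for the new order is still the right one because, like SCOPE_IDENTITY(), it is taken from the statement's own scope:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblOrders (OrderID INTEGER PRIMARY KEY, Customer TEXT);
CREATE TABLE tblAudit  (AuditID INTEGER PRIMARY KEY, Note TEXT);

-- The audit trigger added later: it performs an INSERT of its own.
CREATE TRIGGER trgAuditOrders AFTER INSERT ON tblOrders
BEGIN
    INSERT INTO tblAudit (Note) VALUES ('Order added');
END;
""")

# Seed the audit table so its next ID differs from the next order ID.
conn.executemany("INSERT INTO tblAudit (Note) VALUES (?)",
                 [("earlier entry",)] * 5)

cur = conn.execute("INSERT INTO tblOrders (Customer) VALUES ('Smith Ltd')")
new_order_id = cur.lastrowid   # scoped to the statement, like SCOPE_IDENTITY()
print(new_order_id)            # the order's ID, not the audit row's
```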
A well-known acronym in computing is GIGO: Garbage In, Garbage Out. A database is only as good as the information that it holds, so it is important to trap as many mistakes as you can during data entry. Some examples of data validation that we may employ are date range checking (for example, a new 'ToDo' action cannot be scheduled earlier than the current date), ensuring that a reference code is in the correct format, and making sure that silly or wrongly typed values are not entered (for example in an ordering system, if the usual order quantities were below 10, we may put up an 'Are You Sure?' message if more than 50 was entered).
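A validation routine of this kind can be sketched as follows (illustrative Python; the threshold and messages are invented for the example):

```python
import datetime

USUAL_MAX_QTY = 10   # hypothetical 'usual' order quantity from the example

def validate_order(action_date: datetime.date, quantity: int) -> list:
    """Trap garbage at entry time; returns a list of problems (empty = OK)."""
    problems = []
    if action_date < datetime.date.today():
        problems.append("Date cannot be earlier than today")
    if quantity <= 0:
        problems.append("Quantity must be positive")
    elif quantity > USUAL_MAX_QTY * 5:
        problems.append("Unusually large quantity - are you sure?")
    return problems

today = datetime.date.today()
ok = validate_order(today, 5)
suspicious = validate_order(today, 60)
stale = validate_order(today - datetime.timedelta(days=1), 5)
print(ok, suspicious, stale)
```

The 'are you sure?' category deserves particular attention: the value is not impossible, merely improbable, so the user is warned rather than blocked.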
When dealing with Access systems, a re-attachment routine is needed so that if the back end database is moved the system is resilient enough to recognise this and ask the user to select, via the standard Windows file-open dialog, where the database has moved to.
Warm Standby Server support
With SQL Server databases and .NET front ends we include automated standby server failover functionality in the client. This means that if your primary database server becomes inoperable, the application will try to connect to a designated secondary server. Even if you do not initially have a standby server, this functionality is there waiting should you decide to add one in the future.
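The failover logic amounts to 'try the primary, and if that fails try the standby'. A minimal sketch, using sqlite3 file paths to stand in for the two servers (the paths are invented, and the primary is deliberately unreachable here):

```python
import sqlite3

# Hypothetical stand-ins: the primary 'server' path is deliberately invalid,
# the secondary is reachable.
PRIMARY = "/nonexistent/dir/primary.db"
SECONDARY = ":memory:"

def connect_with_failover(primary: str, secondary: str):
    """Try the designated primary first; fall back to the standby on failure."""
    try:
        conn = sqlite3.connect(primary)
        conn.execute("SELECT 1")          # prove the connection really works
        return conn, "primary"
    except sqlite3.Error:
        conn = sqlite3.connect(secondary)
        conn.execute("SELECT 1")
        return conn, "secondary"

conn, which = connect_with_failover(PRIMARY, SECONDARY)
print("Connected to:", which)
```

The test query after connecting matters: some drivers only fail on first use, so a bare connect can succeed against a dead server.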
Designed With Maintenance in Mind
We design all of our database applications with future maintenance in mind. For example, we assume that we may be asked to undertake an enhancement to the application in five years' time, when we have forgotten most of what we originally programmed. This means that if there is something obscure in the code we ensure that we have commented it properly to explain how it works and why certain decisions were made. We use indentation to highlight the code structure. We don't create huge, unwieldy routines but break them down to a manageable size, and we group related routines together in logical blocks. For SQL Server applications we have one connection routine that is used by the whole application, rather than dozens of separate connections scattered all over. And, of course, we follow all of the other Best Practices outlined here.
Best Practice is all about thinking about the user's requirements, ensuring the integrity of the data, making the user's life easier, taking account of what will happen in practice, considering future changes and enhancements, and working to produce a robust, reliable and flexible system that is easy to maintain.
It is only because we work to Best Practice that we are happy to give a Lifetime Guarantee for our software!