Making Life Easier

Every developer worth their salt has little snippets of code that they use to make their life easier.

So today, I thought I’d share a little utility that I use all the time.

Readers, meet RL. RL, meet readers. RL (Run Line) allows you to run a single line of code to see what it does. Typically, I use it as a quick way to check an OCONV expression I haven’t used in a while, or as a calculator replacement when I don’t feel like tabbing out of the terminal. All you need to do is compile and catalog it to enjoy its simplicity. RL supports one option, ‘-H’, which hides the compiler output if you wish.
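For example, here are the kinds of one-liners I throw at it (the expressions are purely illustrative, and this assumes RL has been compiled and globally cataloged as described above):

```
RL CRT OCONV(DATE(), "D4/")
RL -H CRT OCONV(123456, "MD2,$")
RL CRT 365 * 24 * 60
```

Anything you could put on a single line of UniBasic is fair game.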

To use RL, you can either enter the code as part of the command-line arguments, or you can enter it at a prompt. Here is a screenshot of RL in action:

RL Utility


Disclaimer: I strongly recommend against implementing this on a production machine, as it allows arbitrary code execution. This code has only been tested on UniData 7.2.x. Feel free to use this code however you want. If you somehow turn this into something profitable, share the love and either buy me a beer or make a donation to an open source project.

Okay, so enough with the spiel. Here’s the code:
(Updated 2011-06-02 due to ‘<-1>’ being stripped)


 

EQU TempDir TO "_PH_"
EQU TempProg TO "SNIPPET"

OPEN TempDir TO FH.TempDir ELSE STOP "ERROR: Unable to open {":TempDir:"}"

* Determine if we should hide compiler output
* Also determine the start of any command line 'code'

IF FIELD(@SENTENCE, " ", 2) = "-H" THEN
   HideFlag = @TRUE
   CodeStart = COL2() + 1
END ELSE
   CodeStart = COL1()
   IF CodeStart = 0 THEN
      CodeStart = LEN(@SENTENCE) + 1 ;* Force it to the end
   END ELSE
      CodeStart += 1 ;* Skip the Space
   END

   HideFlag = @FALSE
END

* Get the code from the command line arguments, or
* Get the code from stdin

IF CodeStart <= LEN(@SENTENCE) THEN
   Code = @SENTENCE[CodeStart, LEN(@SENTENCE) - CodeStart + 1]
   Code = TRIM(Code, " ", "B")
END ELSE
   PROMPT ''
   CRT "Enter Code: ":
   INPUT Code
END

* Compile, catalog and run the program
* We only catalog it so that @SENTENCE behaves as you would expect

WRITEU Code TO FH.TempDir, TempProg ON ERROR STOP "ERROR: Unable to write {":TempProg:"}"

Statement = "BASIC ":TempDir:" ":TempProg
Statement<-1> = "CATALOG ":TempDir:" ":TempProg:" FORCE"

IF HideFlag THEN
   EXECUTE Statement CAPTURING Output RETURNING Errors
END ELSE
   EXECUTE Statement
END

EXECUTE TempProg

* Clean up time

DecatStatement = "DELETE.CATALOG ":TempProg
EXECUTE DecatStatement CAPTURING Output RETURNING Errors

DELETE FH.TempDir, "_":TempProg ;* Remove the compiled object code record
DELETE FH.TempDir, TempProg

STOP

Installing UniData on Fedora 14

May 30, 2011 5 comments

For some future upcoming posts, I needed to install UniData on a Linux Machine.

Since I’m already going through the effort of freshly installing both Fedora and UniData, I thought I would share the required steps so anyone else who wants to create a similar test system can do so just as easily. It turns out to be quite simple and straightforward, with only minor setup tasks along the way.

Firstly, I suggest you do this in a virtual machine so that you can create as many dedicated test systems as your heart desires (or your storage allows). For this I’ve used Oracle’s (formerly Sun’s) VirtualBox, which is available for free. To make it easier, I’ve also included instructions for the few extra preparation steps you will need to do the Fedora installation in VirtualBox.

Requirements

Okay, so to start, let’s make sure we have everything we need to do this:

  1. Suggested: Dual Core CPU or better (particularly if running as a VM)
  2. Suggested: 1GB RAM or better (particularly if running as a VM)
  3. Virtual Box software
  4. Fedora 14 ISO
  5. UniData Personal Edition for Linux

Preparing the VM

After you have installed VirtualBox and have it running, we will need to create a new image to run Fedora. Doing this is as simple as clicking the ‘New’ button and following the prompts. Most questions can be left as-is, except for the operating system: set it to ‘Linux’ with version ‘Fedora’.

The default 8GB Dynamic disk is just fine. You can always create and add more disks later.

Now that you have your machine image ready, select the image and click on the settings button. In this screen, click on the storage option and select the DVD drive from the IDE Controller. On the right side there is a small CD/DVD icon you can click on. This will let you select the Fedora 14 ISO you downloaded so the machine will boot from it.

While in the settings screen, you should also add a shared folder and click on the read-only and auto-mount checkbox options.

Installing Fedora

Fedora 14 VM for UniData



If you are not installing this as a virtual machine, you can burn the ISO image to CD/DVD and boot the machine with the CD/DVD in the drive. Only do this if you know what you are doing or intend to have Fedora as the sole operating system.

If you are installing this as a virtual machine, select the VM image and click on the start button.

Fedora should auto-boot from the Fedora image. Once it has loaded and is sitting at the desktop, there is an ‘Install to Hard Disk’ option. Click on this and simply follow the installation instructions Fedora provides.

Installing UniData

Before you can install UniData on Fedora 14, you must first install the libgdbm.so.2 library. You can download and install the RPM for libgdbm.so.2 here.

Apart from the above missing dependency, it is as simple as following the installation manual provided by Rocket Software.

The only other point of note from the initial installation is that not all the escape characters in udtinstall are processed correctly, so expect to see a few lines like “\tWould you like to continue?”

Now you will need to set up the required environment variables. To do this, ensure you are in a shell as root (or run these commands as root) and change to the /etc/profile.d directory. In here we are going to create a unidata.sh file that will contain all the environment variables UniData requires.

Just type in ‘gedit unidata.sh &’ to bring up a text editor (or just use vi/emacs) to paste the following into:

   UDTHOME=/usr/ud72 ; export UDTHOME
   UDTBIN=$UDTHOME/bin ; export UDTBIN
   PATH=$PATH:$UDTBIN ; export PATH
   LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$UDTBIN ; export LD_LIBRARY_PATH
   UDTERRLOG_LEVEL=2 ; export UDTERRLOG_LEVEL

Restart the machine or run the new script as root and you should be able to run ‘startud’ as root. If UniData boots up correctly, open a non-root shell and type in ‘cd $UDTHOME/demo’ then ‘udt’ and you should successfully jump into ECL.

There you have it: a working UniData server running in a virtual machine.

Disclaimer: This does not create a UniData server that will be appropriate to run as a production server.

Categories: Database

U2UG Elections 2010 – Request For Comment

March 15, 2011 2 comments

To the nominees in the U2UG Election, as well as the incoming President,

Let me first state that I’m extremely appreciative of the time and effort that the members of the U2UG Board have and are putting in for the benefit of the community. For a community to be sustained and grow, the community needs leaders to give it direction, to foster innovation and to support facilities that enable the community to learn, share and interact.

Without a community of active developers and product champions, a development stack will stagnate, no new solutions will emerge and hence employment opportunities for those skilled in the stack will eventually diminish.

So it makes sense that when we have the privilege of voting for community leaders, we should make the most of it with an educated decision based not only upon their credentials, but also upon the direction they aspire to lead the community in.

For these reasons I have a few questions for the two gentlemen who have nominated for Vice President (Charles Barouch & David Jordan). Understanding the incoming President’s (Laura Hirsh) thoughts on these questions would also be beneficial.

  • What do you see as the most important role of the U2UG?
  • How do you see the “International User Group” supporting existing local user groups and helping establish new local user groups?
  • What specifically do you think will increase the active member-base of the U2UG and how do you intend to monitor this?
  • What do you think can be done to attract new developers & ISVs to U2 and where do you see the role of the U2UG in this?
  • What do you hope to achieve by the end of this term if you are elected and how do you see it benefiting the community? How will you measure your success in this?


I understand this is a lot of questions to answer in the short time before voting closes, but your answers will help us understand exactly what we are voting for.

Regards,
Dan

PS: Should the two nominating for ‘Member at Large’ wish to answer, their thoughts would be greatly appreciated as well.

Categories: Community

New Developer Zone – U2 PHP PDO Driver

March 2, 2011 1 comment

Rocket U2 Developer Zone

 

I spent last week at the U2 University in Sydney and had a great time. During the opening keynote speech, Rocket announced the new U2 Developer Zone.

Great news! Finally, a public site for developers that links all the resources you would expect: white papers, podcasts, demos, links to manuals, and personal editions of the database servers. Not just a public site, but a public site for developers from Rocket itself. That’s what we needed: strong, visible vendor support of the development community.

It is still a bit rough with a fair amount of content missing, but it has enough in there to make it worth signing up (free) to check it out.

The site is broken down into 4 key areas.

  • Ignite
  • Launch
  • Accelerate
  • Dock

Ignite is aimed at new players and features explanations of what Multi-Value Databases are, some information about U2 as well as summaries of the Developer & Admin tools available for download.

Launch works on getting a developer up and running as quickly as possible with instructions and links for downloading and installing both the UniVerse/UniData servers, as well as their 4GL tool – SB/XA. A bonus is some professional looking video tutorials for getting them up and running.

Accelerate is focused more on in-depth content, with various articles and tutorials produced by Rocket as well as by some community figures.

Dock appears to be aimed at forming community/developer collaboration. It has links to the U2UG as well as Rocket U2 on Facebook and Twitter (even though the Twitter link is missing from the site at the moment). It also has a message board, but this appears to be one of those unfinished features for the time being.

One point of disappointment at the moment is ‘the wall’ it throws up in front of any content. It requires you to sign up and log in before you can actually access the content. While I can appreciate their probable reasoning for this, and appreciate that it is still free, I believe this is one of those things that will prevent those who stop by from search results or idle curiosity from actually getting involved.

By throwing up a wall instead of openly allowing read-only access, there is a two-fold effect. First, Google (and other search engines) will not be able to correctly index the content. In an age where > 90% of website traffic generally comes from search engines, this is definitely not ideal. The other negative effect is that the bounce rate of people not already involved will surely be higher.

Hopefully they will review this decision and decide upon a more open and effective path.


 

U2 PHP PDO Driver

 

So, my title indicated something about a U2 PHP PDO driver, and you were not misled. While at the U2U conference I had the pleasure of speaking with, among others, Jackie from Rocket Software. At one point the conversation turned towards dynamic languages and, in particular, PHP. I was told that some tutorials had actually been written on getting PHP to natively connect to U2, and that they should be findable on the new developer site. Bingo!

After some quick searching on the site, I present two links so you can build your own native connector between PHP and U2:

Hopefully you find this useful!

Data Integrity

February 27, 2011 Leave a comment

One of the features not present in UniData that you may have become used to in the world of SQL is referential integrity.

Data is one of the most valuable assets of a company. If only for this reason alone, it should be treated with the utmost respect and professional care. Everybody knows that backing up data is essential, but what data are you backing up?

If the data is already corrupt, you’re in a whole world of hurt. How long has it been corrupt? Has it corrupted other data? Has it already impacted the business, and to what extent? You can’t just restore from several months ago. You have to spend the time manually working out what went wrong and how to fix it, potentially trawling through backups to find data to reinstate.

Here I should clarify exactly what I’m referring to by ‘corrupt data’. I’m not talking about OS-level corruption; from here on I will be talking about logical corruption: records that violate referential integrity (broken links between records) or domain integrity (values outside their allowed type, length or enumeration).

Unlike the major databases (such as MSSQL, Oracle and MySQL) UniData and UniVerse do not have logical data integrity constraints supported in the database layer. This leaves it up to each individual application to ensure data integrity.

Anyone working with databases knows that bugs (both current and of old) can result in logical inconsistencies creeping into your data. The more denormalised your data, the higher the chance for this corruption.

Some of this corruption will become apparent immediately because a process will fail and require you to locate and fix both the cause of the corruption as well as the corruption itself. Surprisingly, these are not the ones you should be most worried about. The worst are the ones you don’t notice, because they don’t cause the system to visibly malfunction. These are the worst because they can fester in your system for years, silently corrupting data that is derived from it and potentially impacting business decisions. Soon the data itself will become much harder to repair since needed information may no longer be readily at hand. If/when these eventually cause a problem, it will be much harder and time-consuming to address, if even possible.

Since we have to handle logical data integrity entirely in the application layer, U2 databases are somewhat more susceptible to these issues arising from code bugs. To combat this, I propose you adopt two methods.

The first is a Data Integrity Audit (DIA) that you can schedule to run regularly in production. This validates your data and reports on any inconsistencies it encounters, helping you identify issues earlier and track down the programs/conditions that are causing the corruption. We have already implemented this system for ourselves, and I’ll explain how we did it below.

The second method is based on the above DIA. By modifying it to run from file triggers, you can implement a system to use while testing (unit, system and user acceptance testing) that reports exactly what program/line is writing the corrupted record as it happens. Catch it BEFORE it reaches production! However, I don’t recommend actually implementing this in production (at least, not without great care and load testing) since it will have performance implications that may be unacceptable.


Implementing a solution

Alright, enough of the prelude. Let’s talk about implementing a DIA program in your system. It isn’t as hard as you might think, and it can be set up incrementally so you can cover your most important data first.

The system has 4 parts to set up:

  1. Defining the Rules
  2. Storing the Rules
  3. Checking the Data
  4. Reporting on Violations

Defining the Rules

The first step is defining the logical rules that should be constraining your data. The rules fall into 2 categories:

  • Referential integrity: Identify any attributes that are foreign keys (or lists of foreign keys)
  • Domain integrity: Specify the ‘domain’ of the field. This includes type (alpha, numeric, etc), enumerations, length, and if NULL is allowable.

Looking at a few of your key tables, you should be able to quickly identify some basic rules your data naturally should abide by. Write these down as these will be some easy rules to start testing.

Storing the Rules

The second step is determining how to store the rules. Although you can do this however you want, there are several reasons that make using the dictionary file ideal:

  • Placing the constraints in with the schema (both are structural metadata). Collocation is a good thing.
  • Attribute 1 can store anything after the type; it allows you to store the constraint directly with the section of the schema you are constraining!
  • X-Type Attributes allow you to use enumerations (part of domain integrity) while still keeping them defined in the schema, instead of elsewhere.
  • It allows you to easily test and/or enforce the constraints with the ‘CALCULATE’ function (more on this later)

So, how exactly do you store the constraints in with the dictionary records? Here is the scheme we use:

TYPE [FKEY filename [PING f,v,s]] [MAND] [ENUM enum_item]

  • FKEY: Foreign key to ‘filename’
  • PING: Checks for @ID in the foreign record location <f,v,s>
  • MAND: Value cannot be NULL
  • ENUM: Value must be an enumeration in the dictionary X-type record ‘enum_item’

When attribute 6 of the dictionary item indicates that the data is a multivalued list, FKEY, MAND, ENUM and DATATYPE should adhere to it and treat each item in the list separately. The only special case is MAND, which only causes a violation when a multivalue in the list is empty; it does not cause a violation when there is no list at all. If you want to cover that case, you can create another, non-multivalued dictionary item as well and apply the MAND rule to it.
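To make the scheme concrete, here is what a pair of constrained dictionary items might look like (the file, item and enumeration names are hypothetical, and storing one enumeration value per attribute of the X-type record is just one possible layout):

```
CUST.NO (D-type with constraints after the type in attribute 1)
001: D FKEY CUSTOMERS MAND
002: 2
003:
004: Customer
005: 10R
006: S

STATUS (D-type using an enumeration)
001: D ENUM STATUS.CODES MAND
002: 3
003:
004: Status
005: 8L
006: S

STATUS.CODES (X-type record holding the enumeration)
001: X
002: OPEN
003: CLOSED
004: PENDING
```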

Checking the Data

The third part is how you will test/enforce these constraints:

  • Production: A program that, given a filename, reads in the dictionary items and associated constraints. It can then test each record and report any violations. This would typically be run as part of a nightly job and/or, if you are set up for it, on a backup/restore of production onto a development machine.
  • Development: An update trigger subroutine that is only implemented on development. This allows you to transparently test whether new or modified code is corrupting your data before it even makes it into production. Although this would typically not be implemented in your actual production system due to performance impacts, there is no technical reason it cannot be done if so desired (even just for selected files).

These methods are not mutually exclusive and are designed to cover different situations. The first is a post corruption check that allows you to identify issues faster than you normally would. The second allows you to provide better test coverage and reduce the risk of introducing faulty code into your production system.
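As a rough sketch of the production-side check, the audit might look something like this for a single dictionary item (file, item and variable names are all hypothetical; a real version would loop over every D-type item, cache foreign file handles, and write to DIA_RESULTS instead of the screen):

```
* DIA sketch: validate one attribute of one file against the
* constraints stored after the type in attribute 1 of its dictionary item

OPEN "CUSTOMERS" TO FH.Data ELSE STOP "Cannot open CUSTOMERS"
OPEN "DICT", "CUSTOMERS" TO FH.Dict ELSE STOP "Cannot open DICT CUSTOMERS"

READ DictRec FROM FH.Dict, "CUST.NO" ELSE STOP "No dictionary item"
Constraints = FIELD(DictRec<1>, " ", 2, 99) ;* Everything after the D/I type
AttrNo = DictRec<2>                         ;* Attribute the item points at

IF FIELD(Constraints, " ", 1) = "FKEY" THEN
   ForeignFile = FIELD(Constraints, " ", 2)
   OPEN ForeignFile TO FH.Foreign ELSE STOP "Cannot open ":ForeignFile
END

SELECT FH.Data
LOOP
   READNEXT Key ELSE EXIT
   READ Rec FROM FH.Data, Key ELSE CONTINUE
   Value = Rec<AttrNo>

   * MAND: value cannot be null
   IF INDEX(Constraints, "MAND", 1) AND Value = "" THEN
      CRT "MAND violation on ":Key
   END

   * FKEY: value must exist as a key in the foreign file
   IF FIELD(Constraints, " ", 1) = "FKEY" AND Value # "" THEN
      READ ForeignRec FROM FH.Foreign, Value ELSE
         CRT "FKEY violation on ":Key:" (":Value:")"
      END
   END
REPEAT

STOP
```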

Reporting the Violations

The fourth and final part of the system is how you report it.

There are many options you may want to consider, depending on your needs and which of the two options above you are considering it for.

We decided upon a non-obtrusive option that allowed us to build either reports or select lists from the results. This method requires you to create a new file to store the results. For the sake of this article, let us call it DIA_RESULTS. You can clear this file just before running the DIA program, or before running tests if you are using the trigger method.

In DIA_RESULTS, each record should contain the following information:

  • Date failed
  • Time failed
  • Filename the violation was on
  • Key the violation was on
  • Dictionary item used when the violation occurred
  • Rule name the violation occurred on
  • The value that caused the violation (just in case it changes before you get to it)
  • If from a trigger, the current call stack

Using this information it is easy to print off reports, create select lists to get to the records and to determine exactly what was wrong in the data.
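Recording one DIA_RESULTS entry per violation is only a few lines. A sketch, where the field layout and key scheme are simply what we happened to use (adjust to taste):

```
* Record one violation in DIA_RESULTS (illustrative layout)
ViolRec = ""
ViolRec<1> = DATE()        ;* Date failed
ViolRec<2> = TIME()        ;* Time failed
ViolRec<3> = FileName      ;* File the violation was on
ViolRec<4> = RecordKey     ;* Key the violation was on
ViolRec<5> = DictName      ;* Dictionary item in use at the time
ViolRec<6> = RuleName      ;* e.g. "FKEY", "MAND", "ENUM"
ViolRec<7> = BadValue      ;* The offending value, captured before it changes
ViolRec<8> = CallStack     ;* Only populated by the trigger version

ViolKey = FileName:"*":RecordKey:"*":DictName:"*":RuleName
WRITE ViolRec TO FH.Results, ViolKey
```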

Cheat Sheet: UniBasic Debugger

October 30, 2010 Leave a comment

I’ve just completed the first release of a small cheat sheet for using the UniBasic debugger in UniData. Please give feedback if you find any typos or misinformation, or just if you find it helpful and want to let me know. :)

Preview - Debugger Cheat Sheet


Click below to go to the page with the PDF available for download:

UniBasic Debugger Cheat Sheet v1.0.0

I’m currently making a few others, so if you like the idea, stay tuned over the next couple of months.

The problem with numbers

October 11, 2010 4 comments

UniData, like other weakly-typed systems, makes some programming tasks easier by not requiring the developer to declare and adhere to a data type for variables. The general pros and cons of this have been debated many times across many languages and hence will not be discussed here. What will be discussed are specific cases where this can cause you unexpected headaches.

A posting was made on the U2UG forums by Koglan Naicker about some unexpected issues when finding duplicates in a data set.

In short, he found that when strings contained large numbers, it would sometimes incorrectly evaluate two different strings as equal. For example:

IF '360091600172131297' EQ '360091600172131299' THEN CRT "Equal"

The above code results in “Equal” being displayed on the screen. This is caused by a combination of 2 factors.

The first is that UniData is weakly typed. This means it does not explicitly distinguish between strings and numbers, but attempts to determine the data type by examining the data. In this case, since the strings are numeric, it automatically treats them as numbers.

The second part of this issue is that, now that it is treating those 2 strings as numbers, it needs to handle them in an appropriate data type on the CPU. Since the 2 strings are too large to be treated as integers, they get converted to floating-point numbers. Due to the rounding that occurs, both of these strings actually end up with the same floating-point representation! A better method may have been to use something such as a bignum representation instead of converting to floating-point. There would be a speed trade-off, but surely that would have been better than potentially incorrect programs.

Some people suggest prefixing or appending a non-number character to each string to force them to be treated as a string. Not entirely elegant and can have performance implications. Fortunately, UniData does have proper functions to handle these situations. In the case where you will be comparing strings that may consist of only numeric characters, you should use the SCMP function. This function compares two strings as strings, regardless of the actual data in them. Using this when you need to be certain how the comparison is performed can save you a lot of headaches in the future.
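Putting the two comparisons side by side (the EQ branch firing is exactly the problem described above; SCMP forces a character-by-character comparison):

```
A = '360091600172131297'
B = '360091600172131299'

IF A EQ B THEN CRT "EQ: equal"   ;* Displayed - compared as (rounded) numbers
IF SCMP(A, B) = 0 THEN
   CRT "SCMP: equal"
END ELSE
   CRT "SCMP: different"         ;* Displayed - compared as strings
END
```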

Also of interest is that this issue doesn’t just apply to UniBasic, but can also affect UniQuery!

It should be noted though, this only affects UniQuery when the dictionary item is right-aligned with the format field (eg, 20R in attribute 5).

You can test this by creating a file and creating 3 records with the @IDs ‘360091600172130474’, ‘360091600172131297’ and ‘360091600172131299’.

Now, select upon the file where the @ID equals ‘360091600172131297’ and you will see that 2 records are returned!

Results of selection

Non Unique Select

When explicitly selecting a record via a unique key, this isn’t the result a database should return.

So, when dealing with large, potentially numeric fields with UniQuery, you may need 2 dictionary items. A left-aligned one for selecting on and a right-aligned one if you require numerical sorting.
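In other words, a pair of dictionary items along these lines (names are hypothetical):

```
ID.SEL (left-aligned: safe for selecting on)
001: D
002: 0
003:
004: Contract No
005: 20L
006: S

ID.SORT (right-aligned: use for numerical sorting only)
001: D
002: 0
003:
004: Contract No
005: 20R
006: S
```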
