Optimising the set up of your UniData data [Part 1]

September 3, 2010 1 comment

One of the benefits of the U2 data servers is that they make it extremely quick to turn around a new system. The unfortunate downside is that this makes it extremely easy to ignore the architecture of your system, which can lead to future performance issues and harder-to-maintain programs.

Here I’ll be looking at the set-up of your files and records (tables and columns, for those still grasping UniData/UniVerse). Your system revolves around your data, so if you don’t get it right to start with, you inevitably end up with a sub-optimal system. What I won’t be discussing here is the usual modulo/block-size related maintenance of your files; there is already plenty of literature in the manuals on that topic.

To start with, you should have already read my previous post about correctly setting up the layout of your files and the need to create all the relevant D-type dictionary items. With that in mind, I have a story for you…

This story is about Johnny and Alicia, who are both admin staff working for a sales company back in the 1930s. Both have a large set of contracts that they store in folders in a filing cabinet.

Occasionally their managers ask them to find a contract that is being handled by a certain sales rep. Although they hate this task, each time they manually search through the stack of contracts to retrieve it. Funnily enough, in the time it takes Johnny to find one, Alicia can usually find at least two.

Curiosity gets the better of Johnny who eventually asks Alicia how she was so fast.

“It’s easy. I have moved the page with the sales rep’s name to the front of each contract.”

Dang! So simple! Johnny realised that having to dig ten pages deep into each contract was senseless!

Fortunately, admin staff can now use digital retrieval systems, so they don’t have to think about this sort of small detail any more. The need to pay attention to this detail hasn’t gone away though. Now it rests with us.

Not only should you ensure the layout of your data is in the correct format, you should also pay attention to the order of your data: it should be organised with the most frequently searched-upon and utilised data earlier in the record. Since record fields are separated by delimiters, using and querying later attributes requires the engine to scan every character up to the requested attribute just to determine where it starts. By moving the most frequently used data to the beginning of a record, you reduce the amount of work required to initially find the data.

Here are some timings from a simple test run I performed on our system.

The setup: A file with modulo 10007, pre-filled with records keyed from 10000 to 99999. Attributes 1 through 29 are each set to the key. I have created a D-type dictionary item for each attribute timed (D1, D2 & D29).

The test: Perform a select on the file with the attribute equal to a value (e.g. SELECT TIMINGS WITH D1 = "12345"). Repeat this 1000 times for each attribute tested.
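If you want to reproduce something similar, here is a minimal sketch of the timing loop, assuming the TIMINGS file and the D1/D29 dictionary items already exist and that SYSTEM(12) returns the time of day in milliseconds (it is not the exact harness I used):

 START = SYSTEM(12)
 FOR I = 1 TO 1000
    EXECUTE 'SELECT TIMINGS WITH D1 = "12345"' CAPTURING OUTPUT
    CLEARSELECT                               ;* discard the active select list
 NEXT I
 CRT 'D1 total ms: ' : (SYSTEM(12) - START)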


Timings for 1000 SELECTs:

 Data in <1>:  338655 (100.00%)
 Data in <2>:  342134 (101.03%)
 Data in <29>: 471811 (139.32%)

Even with these small records, you can see the difference you can achieve by having your data in the correct order. Scale this up to larger files with bigger records and more complex select statements, combine it with the processing of those records in your subroutines, and it can make a significant difference in execution times across a system.


Open Source: Some Positive News

August 12, 2010 Leave a comment

In case you haven’t noticed, open source has exploded into the mainstream, and a profitable band-wagon has built up around “setup & support”, “customisation” and “enhanced enterprise editions”.

Yes, a lot of those companies do not solely handle FOSS projects, but it is a valuable part of their business.

The best part of FOSS is that, because it is free and readily available, the number of people who will be exposed to the product is greatly increased. With MV-style databases largely unknown (and not understood), having more people aware of the technology can only improve the scene for those of us who work with it. More companies using it means more jobs. Who can argue against that?

That’s why Brian Leach’s announcement at the end of July is such a positive step for the community at large.

mvScan was originally a tool that I had developed for my use, to document a UniVerse system by iterating through the account and file structures, building impact maps and filling out tables with information culled from the entries found to make it easier for someone to search through their system.

So I’ve decided the best way forward is for me to open it up. That way, people who want to run it on their systems can do so and feed back any updates and changes that result from applying it to their specific structure and code organization.

So watch this space for announcements. If this goes well, there’s plenty of other stuff I want to open source.

You can read more on mvScan at Brian’s site.

I don’t know about you, but I’m looking forward to the release.

Crouching Null, Hidden Bug

May 2, 2010 1 comment

Null (actually just an empty string, “”, in U2) is a valid value. Normally it is treated exactly the same as other values, such as 1 or “1”, but not always.

I’ve seen a few bugs that were created by not understanding the differences in how nulls are treated. When debugging, the code can look completely valid as well, meaning it takes even longer to identify and rectify the issue.

Okay, let’s see how you go. I’ll give you a few series of records, each created with the same data. Your job is to work out which records in each series will exactly match their ‘CONTROL.REC’. Good luck, and try to do it without needing to compile the code!

Series 1:

 CONTROL.REC = "" : @AM : "A"


 REC(0) = ""
 REC(0)<2> = "A"

 REC(1) = ""
 REC(1)<-1> = "A"

 REC(2) = ""

 REC(3) = "A"
 INS "" BEFORE REC(3)<1>

Series 2:

 CONTROL.REC = "A" : @AM : ""


 REC(0) = "A"
 REC(0)<2> = ""

 REC(1) = "A"
 REC(1)<-1> = ""

 REC(2) = "A"
 INS "" BEFORE REC(2)<2>

 REC(3) = ""

Series 3:

 CONTROL.REC = "" : @AM : "A" : @AM : ""


 REC(0) = ""
 REC(0)<2> = "A"
 REC(0)<3> = ""

 REC(1) = ""
 REC(1)<-1> = "A"
 REC(1)<-1> = ""

 REC(2) = ""
 INS "" BEFORE REC(2)<1>

 REC(3) = ""
 INS "" BEFORE REC(3)<3>

Series 4:

 CONTROL.REC = "A" : @AM : "" : @AM : "A"


 REC(0) = "A"
 REC(0)<2> = ""
 REC(0)<3> = "A"

 REC(1) = "A"
 REC(1)<-1> = ""
 REC(1)<-1> = "A"

 REC(2) = "A"
 INS "" BEFORE REC(2)<1>

 REC(3) = "A"
 INS "" BEFORE REC(3)<2>

Series 5:

 CONTROL.REC = "A" : @AM : "" : @AM : ""


 REC(0) = "A"
 REC(0)<2> = ""
 REC(0)<3> = ""

 REC(1) = "A"
 REC(1)<-1> = ""
 REC(1)<-1> = ""

 REC(2) = "A"
 INS "" BEFORE REC(2)<2>
 INS "" BEFORE REC(2)<2>

 REC(3) = "A"
 INS "" BEFORE REC(3)<2>
 INS "" BEFORE REC(3)<3>
 REC(4) = ""
 INS "" BEFORE REC(4)<1>

Series 6:

 CONTROL.REC = "A" : @AM : "B" : @AM : "C"


 REC(0) = "A"
 REC(0)<2> = "B"
 REC(0)<3> = "C"

 REC(1) = "A"
 REC(1)<-1> = "B"
 REC(1)<-1> = "C"

 REC(2) = "A"

 REC(3) = "A"
 REC(4) = "C"

Did you get them all? I’ve put the answers as a comment so you can check them if you want. I’d be surprised if you got them all right…

So, what should you take away from this? <-1> and INS can give you (or others!) a world of headaches if you don’t understand their peculiarities with null values.
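One quick way to track these bugs down is to make the delimiters visible before printing a record, so that leading and trailing nulls stand out. A minimal sketch, where REC is whichever variable you are inspecting:

 DEBUG.REC = REC
 CONVERT @AM TO '^' IN DEBUG.REC             ;* show attribute marks as ^
 CRT 'REC = [' : DEBUG.REC : ']'             ;* brackets expose leading/trailing nulls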

Final note: In UniVerse, INS behaviour for some cases is dependent on the flavour your account is running in and the $OPTIONS EXTRA.DELIM setting.

Improving U2 Security

April 11, 2010 3 comments

The general IT knowledge of security has come a long way in the last 20 years; even more dramatically when considering the last 10.

People are generally aware that unless due care is taken, their computer could be infected with a virus, have personal information stolen from it, or even be used to facilitate crime. Major OS vendors have picked up their game and are now making a better attempt to prevent compromises at the OS level. Sure, you still hear the odd story about the latest privilege escalation, but compared to what it used to be…

Network-level security has been given most of the attention (and IT budget funding) and is *generally* fairly secure these days. The application level is where most of the major hacks are happening now but, unfortunately, corporate uptake on securing systems at the application level hasn’t been as good as it was with networks.

Let’s be honest and not undersell ourselves: securing complex applications is no mean feat. It takes knowledge, planning, lots of time and patience, and sometimes out-of-the-box thinking. Thankfully, most modern programming languages and database management systems do the heavy lifting for us. From the security features built into C# and Java to the vastly improved safety net found in SQL engines, with fine-grained access control and in-built functions for preventing SQL injection, a lot of the basics have been solved.

This is where the U2 family has a few gaps to be filled. UniBasic needs some inbuilt functions for sanitisation, UniObjects needs some form of access control built around it and UniQuery/RetrieVe prepared statements/stored procedures would be nice.

With the increased push to integrate U2 servers as databases for modern front-ends such as web applications, data sanitisation is going to become a prevalent topic in the community. Built-in functions for UniQuery/RetrieVe, SQL and HTML sanitisation/encoding would be welcome additions to the UniBasic command family. Even better would be some form of prepared statements for the query languages. This would make it simpler and easier to achieve better program security.
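To illustrate the sort of helper I mean, here is a minimal sketch of a home-grown sanitiser for values destined for UniQuery/RetrieVe statements. SAFE.VALUE is a hypothetical name, not a built-in function, and a real version would need reviewing against your own query usage:

 SUBROUTINE SAFE.VALUE(CLEAN, DIRTY)
 * Strip the characters that would let user input break out of a quoted
 * literal in a UniQuery/RetrieVe statement, plus the system delimiters.
    CLEAN = DIRTY
    CONVERT '"' : "'" : '\' TO '' IN CLEAN
    CONVERT @AM : @VM : @SM TO '' IN CLEAN
    RETURN
 END

The cleaned value can then be concatenated into a SELECT statement with far less risk of the quoting being broken out of.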

UniObjects is touted as a standard method of connecting GUI application front-ends to a U2 back-end. However, due to the limited access control supported by UniObjects, having the required port open to anything other than back-end servers is a dangerous hole in your system. Consider user ‘X’. User ‘X’ has valid login credentials for the old green-screen system. IT brings out a new Windows GUI application, let’s say for reporting, that runs on the user’s machine and uses UniObjects to connect to U2. In the old green-screen system, user ‘X’ was limited to set menus and programs and could not get access to ECL/TCL. With enough knowledge (and malice), user ‘X’ can now freely use his green-screen login credentials to log into the U2 system via UniObjects, read and write records directly, and even execute raw ECL/TCL commands.

So what exactly is the problem with UniObjects? Quite simply, it has no fine-grained server-side control over which actions can be performed, or which commands can be issued, via UniObjects. As long as you can log in, you get a free pass to the back-end’s data. Let’s take MS SQL Server as a counter-example: you can create views and stored procedures, and grant or deny users a suite of privileges on tables and commands. Essentially, UniData needs some access control scheme for UniQuery that allows you to define whether users can read or write records in certain files. Ideally, all reads and writes would be done through UniBasic subroutines, with the RPC daemon able to have a command ‘white-list’ set up. That way, all data access is moderated by UniBasic code, and the white-list only allows those subroutines to be called.
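As a sketch of that idea (my own illustration, not an existing U2 feature): route every UniObjects request through one catalogued subroutine and refuse anything it does not recognise. The file and action names below are hypothetical.

 SUBROUTINE API.GATEWAY(RESULT, ACTION, PARAM)
 * Called by the client through a UniObjects subroutine call. Only the
 * actions listed here are honoured; everything else is refused.
    RESULT = ''
    BEGIN CASE
       CASE ACTION = 'GET.CUSTOMER'
          OPEN 'CUSTOMERS' TO F.CUST THEN
             READ RESULT FROM F.CUST, PARAM ELSE RESULT = 'ERROR: not found'
          END ELSE
             RESULT = 'ERROR: file unavailable'
          END
       CASE 1
          RESULT = 'ERROR: action not permitted'
    END CASE
    RETURN
 END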

All this highlights an issue we need to overcome as a community: the lack of U2-specific security literature. Where is the UniData/UniVerse security manual? Where is the “Top 10 common security mistakes” for U2? Sadly, security does seem to be an afterthought. Sometimes even a ‘neverthought’.

Security is not Obscurity. Even in U2 [Part 3]

March 1, 2010 1 comment

A few years ago I read an interesting article titled Denial of Service via Algorithmic Complexity Attacks. When I started working with UniData, it never crossed my mind that U2 had the same class of vulnerabilities, but it does.

If you develop for a U2 system where you cannot afford for malicious internal/external entities to adversely affect system performance, then I highly suggest you read the above linked paper.

I’ll divide this into two sections:

Hash file vulnerability
Dynamic Array vulnerability

The first place I’ll draw your attention to is the humble hash file at the core of UniData and UniVerse. As you probably know, each record is placed in a group dependent on the hash value of its record ID, along with the modulo and hashing algorithm of the file. There are two hashing algorithms a hash file can use: type 0, or ‘GENERAL’, is the default general-purpose hashing algorithm, whereas type 1, or ‘SEQ.NUM’, is an alternative you can specify that is designed to handle sequential keys. The hash file is basically a hash table with chaining.

Let’s assume we’re working at the HackMe Ltd company that has made a public website to integrate with their existing backend system, which is UniData driven. It is decided that people can pick their own usernames when signing up. Since these usernames are unique, they have been used as the record ID.

Ever since he was not hired after interviewing at HackMe Ltd, Harry has wanted to show them up. Knowing from his interview (and their job ads) that they use UniData on the backend, he installs UniData, makes some initial guesses at the modulo of their ‘users’ file and calculates a few sets of usernames for different moduli.

Now, by going to their website and taking timings of the “Check username availability” feature, Harry is able to become reasonably sure of the modulo of the file. He sets up his computer to run all night generating keys that hash to a single group, and sets up his email server to automatically wget the confirmation URL in received emails (getting around the “Confirm email address” step).

The next day he runs a script to sign up all the usernames gradually over the day. After they have all been signed up, Harry simply scripts a few “Check username availability” calls for the last username generated to start his Denial of Service attack. Essentially, he has taken the non-matching lookup performance of the hash file from O(1 + k/n) to O(k) (where k is the number of keys and n is the modulo). Even worse, because of how level 1 overflows work, it now requires multiple disk reads as well (UniData only, I believe). Continual random access to a file that is heavily weighted towards one group is O(k^2).

Now, to give you a visual example, I have run a test on my home machine and produced 2 graphs.

Test specs:

CPU: Core Duo T7250 (2.0GHZ)
OS: Vista SP2 (32-bit)
DB: UniData 7.2 PE (Built 3771)
Hash File: Modulo 4013 – Type 0

The test:
Pre-generate two sets of keys: one of sequential keys, the other of keys chosen because they all hash to a single group. Timings are recorded for the total time, in milliseconds, to:

  1. write null records for all the keys, and
  2. read all the records back in.

Separate timings are taken for the sequential and the chosen keys. The test is repeated for key counts from 1000 to 59000, in increments of 1000.

DOSAC UniBasic Code
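If you don’t want to pull apart the linked code, here is a minimal sketch of the sequential-key half of the test; DOSAC.TEST is a hypothetical file name, SYSTEM(12) is assumed to return the time of day in milliseconds, and the chosen-key run is identical except that the keys come from the pre-generated list:

 OPEN 'DOSAC.TEST' TO F.TEST ELSE STOP 'Cannot open DOSAC.TEST'
 KEY.COUNT = 59000
 START = SYSTEM(12)
 FOR K = 1 TO KEY.COUNT
    WRITE '' ON F.TEST, K                     ;* 1. write a null record per key
 NEXT K
 FOR K = 1 TO KEY.COUNT
    READ REC FROM F.TEST, K ELSE REC = ''     ;* 2. read them all back
 NEXT K
 CRT 'Elapsed ms: ' : (SYSTEM(12) - START)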

First graph – sequential key timings by themselves:

[Graph: Sequential Timings Only]

Second graph – chosen key timings alongside sequential key timings:

[Graph: Sequential and Chosen Timings]

Naturally, timings are rough, but they are accurate enough to paint the picture.

Actually, now that I’ve mentioned painting…

Have you heard of Schlemiel the Painter?

Schlemiel gets a job as a street painter, painting the dotted lines down the middle of the road. On the first day he takes a can of paint out to the road and finishes 300 yards of the road. “That’s pretty good!” says his boss, “you’re a fast worker!” and pays him a kopeck.

The next day Schlemiel only gets 150 yards done. “Well, that’s not nearly as good as yesterday, but you’re still a fast worker. 150 yards is respectable,” and pays him a kopeck.

The next day Schlemiel paints 30 yards of the road. “Only 30!” shouts his boss. “That’s unacceptable! On the first day you did ten times that much work! What’s going on?”

“I can’t help it,” says Schlemiel. “Every day I get farther and farther away from the paint can!”

(Credit: Joel Spolsky, 2001)

When looking at dynamic arrays in U2, you should see how they can be exactly like a computerised version of Schlemiel the Painter. In fact, a public article on PickWiki pointed this out quite some time ago. UniData is affected more than UniVerse, in that UniVerse has an internal hint mechanism for attributes. The problem is that if an uncontrolled (e.g. external) entity has control over the number of items in a dynamic array, you could be vulnerable to a Denial of Service attack. It could even be unintentional.

So, let’s see what all the fuss is about. Firstly, a quick recap of the issue with dynamic arrays.

Essentially, when doing an operation like CRT STRING<X,Y,Z>, the engine has to scan the string character by character, counting attribute, multi-value and sub-value marks as it goes. If you increment Y or Z (or X in UniData’s case) and do the same operation, it has to re-scan the string from the start all over again. As the number of elements increases, the flaw in this method becomes more noticeable. In fact, cycling through each element in this manner is an O(k^2) algorithm.

I’ve seen this issue bite, and bite hard. It involved two slightly broken programs finding just the right (or wrong) timing.

The first program was a record-lock monitoring program. It used the GETREADU() UniBasic function, after which it looped over every entry and generated a report on all locks more than 10 minutes old. This process was automatically scheduled to run at regular intervals and had been operating for months without issue.

The second program was a once-off update program. Basically, it read each record in a large file, locked it, and then, if certain complex conditions were met, it updated an attribute and moved on to the next record. See the problem? It didn’t release a record if it didn’t need updating. The processing was estimated to take about 30 minutes and, as it turns out, not many records met the complex conditions.

See the bigger problem now? Yup, that’s right, the dynamic array returned by GETREADU() was astronomical! This resulted in the monitoring program saturating a CPU core: the same core the update program was running on. Uh oh! System performance issues ensued until the culprit was found and dealt with.

So, what do we do about these issues? You want a stable system, right? One that is not so easy to bring to its knees by malicious users and the unfortunate timing of buggy code?

Hashed files:

DO NOT use external input as record keys! Place it in attribute 1, build a D-type dictionary item and index it if you need to, but do not use it as the @ID!
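A minimal sketch of that pattern, assuming F.CONTROL and F.USERS are already open and that a USERNAME D-type dictionary item pointing at attribute 1 (with a secondary index over it) exists for lookups; the names are hypothetical:

 * Generate an internal sequential key; the externally supplied username
 * goes in attribute 1, never in the @ID.
 READVU NEXT.ID FROM F.CONTROL, 'USERS.SEQ', 1 ELSE NEXT.ID = 0
 NEXT.ID = NEXT.ID + 1
 WRITEV NEXT.ID ON F.CONTROL, 'USERS.SEQ', 1  ;* the write releases the lock

 REC = ''
 REC<1> = USERNAME
 WRITE REC ON F.USERS, NEXT.ID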

A further option would be for hash files and their hashing algorithms to be updated to deal with this type of malicious edge case. Other languages (take Perl, for example) have updated their hash tables to use hashing algorithms that are seeded at run-time. This means you cannot prepare ‘attack’ keys ahead of time and cannot replicate how the hashing works on another computer, since the hash algorithm will be seeded differently. Obviously, this cannot be done in exactly the same way with hash files, as they are a persistent data store. It could, however, be done on each CREATE.FILE. That way, even if a malicious party can determine the modulo of a file, they would not be able to duplicate it on their system, as each file would be seeded differently. Doing this would bring UniData and UniVerse in line with the security improvements made in other modern stacks.

Dynamic arrays:

This one is simple: use REMOVE, don’t use simple FOR loops. Think through your data and where it is being sourced from. Is it from external entities? Is it from internal entities whose behaviour cannot be guaranteed to remain within safe bounds? If the answer to either of those questions is even a ‘maybe’, stay safe and use REMOVE.
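A minimal sketch of the difference, using a hypothetical DYN.LIST dynamic array; both loops print every attribute, but the first re-scans the string from the start on every pass while the second walks it once:

 * O(k^2): each <I> extraction starts scanning from the first character.
 CNT = DCOUNT(DYN.LIST, @AM)
 FOR I = 1 TO CNT
    CRT DYN.LIST<I>
 NEXT I

 * O(k): REMOVE keeps an internal pointer into the string.
 LOOP
    REMOVE ITEM FROM DYN.LIST SETTING MORE
    CRT ITEM
 WHILE MORE DO REPEAT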

The Joel Test – How do you fare?

February 21, 2010 1 comment

Have you ever heard of The Joel Test? If not, go read it now.

Have you read it yet? No? Go on, read it!

Okay, welcome back!

What I’m curious about here is how well the U2 (and even the wider MV) community fares along these lines. Support for source control is a bit behind, and from talking with people in some other U2 shops, the uptake of real modern tools isn’t too crash hot. I’d be really surprised if any of us score greater than 9, but equally surprised if any shop with more than 4 developers scores below 4.

So let’s hear it. Give your honest answer (don’t sugar coat it!).

Although your answer will be anonymous (feel free to elaborate on your scores in a comment, however!), I’ll kick it off by giving the results as I see them for where I am now.

We scored 5 out of 12. Not completely horrible, but not ideal either. I’ll post the blow-by-blow breakdown in the comment section for those that are interested.

So, how does your establishment fare?


U2 Dictionaries [Part 2]

February 10, 2010 1 comment

In the last post I suggested that each piece of information in a file record needed an associated dictionary item.

Some may look at their files and realise it just cannot be done. In that case, “you’re doing it wrong”.

A common case: you have a file that logs transactions of some sort, and each new transaction is simply appended to the record as a new attribute.

There are several issues with this style of record structure.

Firstly, you cannot create dictionary items to reference any of the information (unless, of course, you create a subroutine and call it from the dictionary). For example, if each transaction has a time-stamp, you cannot use UniQuery/RetrieVe to select all records with a certain time-stamp.

Secondly, any time you read in the record and need to count how many transactions it contains, the entire record has to be parsed. If, instead, each piece of information is stored in its own attribute (say, time-stamps in <1>, amounts in <2>, etc.), only the first attribute needs to be parsed, potentially cutting down the CPU expense greatly.
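To make the contrast concrete, here is a minimal sketch of the two layouts, with hypothetical field names; in the second form the D-type dictionary items simply point at attributes 1, 2 and 3 and are flagged as multivalued:

 * LOG.REC is assumed to have been read from (or initialised for) the log file.
 *
 * Append-style: one whole transaction per attribute (hard to dictionary):
 *   LOG.REC<-1> = TIMESTAMP : '*' : AMOUNT : '*' : REFERENCE

 * MV-style: one attribute per piece of information, values kept in parallel.
 LOG.REC<1, -1> = TIMESTAMP                   ;* all time-stamps in attribute 1
 LOG.REC<2, -1> = AMOUNT                      ;* all amounts in attribute 2
 LOG.REC<3, -1> = REFERENCE                   ;* all references in attribute 3

 * Counting transactions now only needs attribute 1 to be scanned.
 TXN.COUNT = DCOUNT(LOG.REC<1>, @VM)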

So, if you must store some sort of transaction/log style data in a U2 record, please reconsider the traditional approach of appending the whole transaction to the end and take a more U2 perspective by splitting each bit of information into its own attribute. This way, it will be much easier to use U2’s inbuilt features when manipulating and reporting on your data.
