
Abbott CD3200

 

updated:2016.07.13


ASSIGNMENT

Abbott had two divergent instrument development branches. The older one was commercially successful and flexible but rather haphazard and poorly documented. The newer one claimed to be a platform for the entire product line but was unproven. I was initially hired to study the requirements for a new mid-range instrument (at first called the CD2500) and to determine which of the two branches, or an entirely new design, should be used as the basis for the new instrument. I concluded that both existing development branches contained many design flaws. However, the project manager and I both felt that neither of us knew enough about hematology to develop an entirely new instrument and, in any case, we didn’t have enough time to do it. We decided to base the instrument on the existing CD3000, which had a generally good architecture despite its flaws. I was then hired to lead this effort.

The new instrument was to provide performance similar to the CD3000 but include an integral sample loader like the instruments in the other development branch. Abbott had hoped that the project could be realized by working only on the instrument controller program (which Abbott calls the data station), but it was clear that the loader could not be supported without some work on the analyzer hardware and software. Consequently, my responsibilities expanded to include system architecture design.

VERSION CONTROL

I initially interviewed all of the programmers that we inherited from other 3000-series projects and found that each one had their own unique make file for building the controller program. Some of them had several different make files, depending on which part they were working on. PVCS was used for version control but, here too, each programmer had their own policies, and all complained that file locking by others was hindering their work.

Before attempting any new work, I developed policies to cover every aspect of program development, version control, and deployment. Since no task was left without a policy, the policy document was quite long and clearly unenforceable by “bully pulpit”. In any case, I didn’t want the programmers to waste their time learning my rules. Instead, I embodied the policies in a single make file that served the entire program development cycle, using Polymake at first and later the more powerful Opus Make as a job control language, so that the rules were enforced automatically rather than memorized.

A traditional make file, containing environment information as well as build rules, would have failed in this situation. Some of the programmers worked at home, some worked entirely on the network using only network tools, and others used a mix of local and network tools. Sometimes a programmer would simply be fixing one file, but at other times would be making cross-library modifications that would break the build at intermediate stages. To accommodate these legitimate variations, I separated the make into three parts. The largest part, the make file itself, contained only rules, with every environment reference replaced by a macro. The other two parts defined the environments for the tools and for the program under development; each was a macro file included in the make. Several standard versions of these macro files covered nearly all of the programmers’ variations, but they were free to create more personalized versions. I allowed no one else to modify the make file and encountered no resistance to this edict. Most of them were happy with the standard environments that I defined for them and never modified their macro files.

To correct the file-locking problem, my make checked out files without locking and allowed over-puts, but with a warning and an option to abort. When a programmer attempted to put back a file that had been over-put, the make blocked the put, explained the problem, and automatically invoked an interactive merge program.

The previous deployment process had been arduous. The version control manager had to stop all programmer activity (by exhortation) for several hours while she added version information to a number of the files and then captured and copied the set. More than once, programs were deployed with erroneous version information. My make automated this process, reducing it to less than five minutes while guaranteeing correctness without special operator knowledge.

ROBOTIC CONTROL

The project inherited sample loader mechanics from one instrument branch and a script language from the other. We contracted out the design of a controller for the loader. We could have specified the controller to be programmed entirely in C, but I felt that it made more sense to have all instrument hardware controlled by scripts. The contractor was instructed to implement an interpreter for a subset of the existing language but instead developed a much lower-level instruction set. Our compiler writer refused to add these instructions, claiming that script writers would be unable to use them. His objections were legitimate. In particular, there were no commands for manipulating single-bit I/O devices individually; the script writer was instead required to write a series of Boolean operations. I resolved this problem by writing a new optimizing compiler, which translated higher-level script commands into sequences of the controller’s low-level instructions.
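
To give a flavor of that translation (the instruction names and interface below are invented for illustration; the actual CD3200 script commands and controller instruction set are not documented here), a single-bit script command expands into the kind of read/modify/write sequence the controller actually executed:

    /* Hypothetical sketch only: the real emitter, instruction names, and
     * port addressing are Abbott's and are not shown in this article. */
    #include <stdio.h>

    /* Emit one low-level controller instruction (here just printed). */
    static void emit(const char *op, unsigned port, unsigned operand)
    {
        printf("%-5s port=%u operand=0x%02X\n", op, port, operand);
    }

    /* Compile the script-level command "set (or clear) bit B of port P"
     * into the controller's Boolean read/modify/write sequence. */
    static void compile_bit_command(unsigned port, unsigned bit, int set)
    {
        emit("READ", port, 0);                        /* fetch current port byte */
        if (set)
            emit("OR",  port, 1u << bit);             /* force the one bit high  */
        else
            emit("AND", port, 0xFFu & ~(1u << bit));  /* force the one bit low   */
        emit("WRITE", port, 0);                       /* write the byte back     */
    }

    int main(void)
    {
        compile_bit_command(3, 5, 1);   /* script: SET 3.5   */
        compile_bit_command(3, 5, 0);   /* script: CLEAR 3.5 */
        return 0;
    }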

EMBEDDED SYSTEM DESIGN

The analyzer system that we inherited contained obsolete components. The Abbott ASIC group had offered to design replacements, but I rejected this approach. Instead, I developed functional requirements by examining how these components were actually used. We replaced an MC68000 with an MC68340, an Intel 8041 with a main-CPU ISR, random logic with an FPGA, EPROM with flash, and all through-hole components with surface-mount parts. This was significantly more efficient than trying to retain more of the original hardware, but it left us with a problem: the only way to program the CPU’s surface-mount flash BIOS memory was through a bed-of-nails tester, which required facilities that we would not normally have until we were nearing mass production. Further, these facilities would be located in a different state from the one where the programmers were working. I wanted on-site (preferably on each programmer’s desk) programming capability. The MC68340 has a built-in debugging capability called BDM (Background Debug Mode). With the proper interface between the CPU and flash, a BDM debugger could program flash through the CPU. However, no available debuggers could meet the flash timing. As I describe in my “Embedded Systems Programming” article Programming Flash via Background Debug Mode, I invented a way to get around this problem as well as to have full BDM debugging capability even when working on the flash-based program, which is otherwise not possible.

ALGORITHM VISUALIZER

Histograms play a significant role in both analysis and presentation in hematology systems, and it is important that their mathematical manipulation be well understood. My review of the inherited program revealed an excessive number of similar algorithms, particularly for smoothing histograms, suggesting that the programmers were not clear about the effect of these algorithms. Rather than simply correcting the problems, I wrote a program that could take data from a variety of sources, apply user-selected algorithms to it, and display the result as a histogram. Most coefficients could be modified by scroll bar, with synchronous replotting to show the effect immediately. I reduced some of the common algorithms to parameterized generic forms. For example, instead of a collection of different polynomial smoothers, I created a generic polynomial filter with configurable tap positions and coefficients.
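
A minimal sketch of what such a parameterized smoother might look like (the tap positions, coefficients, bin count, and normalization policy below are illustrative assumptions, not the CD3200’s actual filter):

    /* One generic routine replaces a family of hard-coded smoothers by
     * taking its tap offsets and coefficients as parameters.  All numbers
     * in main() are invented for demonstration. */
    #include <stdio.h>

    #define BINS 256

    static void smooth(const double in[BINS], double out[BINS],
                       const int tap[], const double coeff[], int ntaps)
    {
        for (int i = 0; i < BINS; i++) {
            double sum = 0.0, norm = 0.0;
            for (int t = 0; t < ntaps; t++) {
                int j = i + tap[t];
                if (j < 0 || j >= BINS)
                    continue;                /* ignore taps that fall off an end */
                sum  += coeff[t] * in[j];
                norm += coeff[t];
            }
            out[i] = (norm != 0.0) ? sum / norm : in[i];
        }
    }

    int main(void)
    {
        static double raw[BINS], smoothed[BINS];
        /* Example: a 5-tap, triangularly weighted smoother. */
        const int    tap[]   = { -2, -1, 0, 1, 2 };
        const double coeff[] = {  1,  2, 3, 2, 1 };

        raw[100] = 50.0;                     /* a lone spike to be spread out */
        smooth(raw, smoothed, tap, coeff, 5);
        printf("bin 100: %.1f -> %.1f\n", raw[100], smoothed[100]);
        return 0;
    }

Hooking parameters like tap[] and coeff[] to scroll bars is what made the visualizer’s immediate-feedback exploration possible.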

ALGORITHM TRANSFORMATION

The inherited program used very deep decision trees and fixed histogram positions to analyze the collected data. I changed this to deterministic mathematical functions in which every relevant parameter, weighted by a configurable coefficient, contributed to the result. This approach drastically simplified the code, for example transforming one 2000-statement, 16-level-deep sequence of nested functions into a single statement with a small table. Additionally, making decisions through coefficients instead of control flow allowed me to treat the decision process as just another algorithm whose coefficients could be manipulated via scroll bars in the visualizer. Thousands of hours of program development were thus reduced to seconds on the visualizer. The visualizer could flip through sample data at nearly persistence-of-vision speeds, enabling developers to review the effect of a coefficient change on hundreds of samples in seconds.
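
The flavor of the transformation, in a deliberately tiny sketch (the parameters, coefficients, and threshold are invented; the real CD3200 table and weights are Abbott’s):

    /* Decision by weighted sum instead of nested if/else: every relevant
     * parameter always contributes coeff[i] * value[i], and the total is
     * compared with a threshold.  Retuning the decision means editing the
     * table (or dragging a scroll bar in the visualizer), not the code. */
    #include <stdio.h>

    #define NPARAMS 4

    static const double coeff[NPARAMS] = { 0.8, -1.2, 0.5, 2.0 };  /* tunable */
    static const double threshold      = 1.0;                      /* tunable */

    static int flag_sample(const double value[NPARAMS])
    {
        double score = 0.0;
        for (int i = 0; i < NPARAMS; i++)
            score += coeff[i] * value[i];
        return score > threshold;            /* the one remaining decision */
    }

    int main(void)
    {
        const double sample[NPARAMS] = { 0.9, 0.3, 1.1, 0.2 };
        printf("flagged: %s\n", flag_sample(sample) ? "yes" : "no");
        return 0;
    }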

WEB SERVER

Access to the instrument through a network had been requested, although it was not a formal requirement. There were no suggestions as to what kind of program would be on the other end. I decided that it would be dangerous to expose the instrument as a peer server and instead implemented an HTTP/HTML (version 1.0; this was 1998) web server, enabling me to fully control what could be accessed while lowering the remote coordination cost, since any browser could be used. My program built web pages on the fly by combining static HTML fragments with dynamic strings (by sprintf). It served these in response to client GET requests. It also served files as MIME type application/octet-stream. I didn’t use FTP for file transfer because the server provided virtual as well as real files, both from virtual directories.
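
In outline, the dynamic-page side looked something like the sketch below (the page content, fragment names, and status values are invented; snprintf is used here where the article says sprintf, and the real server’s virtual-file and octet-stream handling is not shown):

    /* Build an HTTP/1.0 response by gluing static HTML fragments to
     * dynamic values.  Everything about the page itself is illustrative. */
    #include <stdio.h>

    static const char page_head[] =
        "<html><head><title>CD3200</title></head><body>";
    static const char page_tail[] = "</body></html>";

    /* Compose the body first (the header needs its length), then prepend
     * the HTTP/1.0 header.  Returns the total number of bytes written. */
    static int build_status_page(char *out, size_t outsize,
                                 const char *status, int samples_run)
    {
        char body[1024];
        int blen = snprintf(body, sizeof body,
                            "%s<p>Status: %s</p><p>Samples run: %d</p>%s",
                            page_head, status, samples_run, page_tail);

        return snprintf(out, outsize,
                        "HTTP/1.0 200 OK\r\n"
                        "Content-Type: text/html\r\n"
                        "Content-Length: %d\r\n"
                        "\r\n"
                        "%s", blen, body);
    }

    int main(void)
    {
        char response[2048];
        build_status_page(response, sizeof response, "Ready", 42);
        fputs(response, stdout);   /* in the instrument this went to the client socket */
        return 0;
    }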

