David McCracken
Elo Software Engineer
updated: 2018.02.11
Elo makes touch screens, video monitors with integrated touch screens, and complete touch computers primarily for OEM customers. It develops new touch technologies, its own controllers, and drivers for a wide variety of operating systems, including all versions of Windows and Linux. Most of its controllers communicate with a computer via serial (UART) or USB. Some of the devices are HID. Some are vendor-specified bulk, interrupt, or isochronous. Many present two different USB interfaces, one for in-band (touch stream) and the other for out-of-band (control) communication.
My touch screen experience was limited to an ordinary serial driver I had implemented to support them in Hitachi’s instruments, but the software manager at Elo hired me into a staff software engineer position anyway for my XP driver expertise. When the interviewer asked whether I knew how to create a mechanism for a driver to asynchronously signal an application, I correctly (and, I think, unexpectedly, given the obscurity of the question) answered that the driver calls ObReferenceObjectByHandle with the handle of an application Event passed through the IoCtl interface. She asked me whether the function really works, because it didn’t seem to be working for them. In fact, it was an important but unchallenging part of the remote DMA communication driver I had written for Abbott.
During my time at Elo I didn’t just work on XP drivers but on a wide range of topics. I developed drivers, applications, installers, and automation scripts for XP, XP embedded, (Windows) Tablet, and WinCE. I developed several touch inventions, including a means of linearizing the edge response of resistive screens, multitouch in standard resistive screens, and a fail-safe response to screen failure or disconnect. The software manager allowed me considerable freedom working with (OEM) customers and our FAEs when he realized that they and I enjoyed working together.
My first assignment was to design and implement a win32 GUI program to update the firmware of one of Elo’s most popular controllers. This fulfilled an emergency customer request but was mainly intended to introduce me to the Elo environment. My initial investigation revealed that Elo had several generations of the controller and several firmware variations within each generation. Different procedures were needed depending on the combination and our customer had deployed tens of thousands of units of varying vintage. Instead of a simple hard-wired process, my program would interrogate the controller and devise the most expedited procedure, guiding the operator when necessary, for example to change a jumper on the controller.
Elo had developed a configuration language and protocol, called SmartSet, that all of its touch screen controllers recognized. Elo used this for its own programs, such as for calibration and alignment, and made it available to OEM customers for their own programs. Elo also had a simple GUI program for interactive configuration. This left a large usability gap. A person could painstakingly configure a controller interactively or a full-blown program could be developed for a specific, possibly automated, purpose. But there was no simple means of automating new and unique procedures.
I considered writing some TCL scripts as examples for others to imitate but decided that a single program combining interactive operation with the ability to play recorded scripts from a text file would be much more useful. Simply sending SmartSet commands to a controller as fast as it could accept them would not work in all situations because sometimes the response of the controller would determine subsequent commands. To address this, I designed a scripting language with conditional and unconditional delays and branching. It is much simpler than TCL but adequate for this application and aligns well with interactive operations, making it easy to understand.
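The essential mechanism can be sketched as a tiny interpreter. The command names here (SEND, IFRESP, GOTO, LABEL) and the simulated controller response are invented for illustration; the actual SmartSet script syntax is not reproduced in this document.

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch of a script player supporting conditional and
// unconditional branching keyed to the controller's response.
struct Player {
    std::vector<std::string> sent;    // commands "sent" to the controller
    std::string lastResponse;         // simulated controller response

    void run(const std::vector<std::string>& script) {
        // First pass: collect branch targets.
        std::map<std::string, size_t> labels;
        for (size_t i = 0; i < script.size(); ++i) {
            std::istringstream ss(script[i]);
            std::string op, arg;
            ss >> op >> arg;
            if (op == "LABEL") labels[arg] = i;
        }
        // Second pass: execute, branching on controller responses.
        for (size_t pc = 0; pc < script.size(); ++pc) {
            std::istringstream ss(script[pc]);
            std::string op, a1, a2;
            ss >> op >> a1 >> a2;
            if (op == "SEND") {
                sent.push_back(a1);
                // Simulated response; a real player reads the serial/USB link.
                lastResponse = (a1 == "QUERY") ? "OK" : "";
            } else if (op == "IFRESP" && lastResponse == a1) {
                pc = labels.at(a2);   // conditional branch on response
            } else if (op == "GOTO") {
                pc = labels.at(a1);   // unconditional branch
            }                          // LABEL is a no-op at run time
        }
    }
};
```

A script such as `SEND QUERY / IFRESP OK skip / SEND NEVER / LABEL skip / SEND DONE` would skip the `SEND NEVER` step whenever the controller answers `OK`, which is exactly the kind of response-dependent sequencing that a fire-and-forget command stream cannot express.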
With my script-playing configuration program, Elo engineers and customers could automate even complex control and configuration procedures without help from the software group. But it could not be used to update firmware because it relied on a fully operating SmartSet execution engine in the controller. My firmware update program worked because that controller had a small permanent program for this purpose; other controller types did not. Further, updating could not be done over USB, which required a much more complex program to function at all.
The firmware group had developed a new updating method that could work over USB but they couldn’t even test it without a sophisticated host program. Superficially, the job is simple. Read the program image file produced by the linker and transmit it to the controller in special SmartSet packets. The real job is not simple. The first controller with this capability used an NXP LPC2142 with an ARM7 core. The linker produced a modified Intel hex32 file. Significant magic was needed just to assemble a coherent image from this. For example, it had different schemes for interpreting addresses depending on whether the block was destined for the quad-aligned program memory or one of variously aligned data areas. Per hex32 protocol, the image was chopped into arbitrary blocks, often breaching alignment requirements. Then the program had to parse the coherent image into pieces compatible with the SmartSet protocol but also without violating the target’s data alignment and programming block size restrictions. Even when all this was done correctly, with the target reprogramming itself, its behavior would change several times during the procedure and the program would have to detect and accommodate this.
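The first stage of that work, assembling a coherent image, starts with parsing individual hex32 records. The following is a minimal sketch of parsing one standard Intel HEX record (":LLAAAATTDD..CC"); the Elo-specific, region-dependent address interpretation described above is not reproduced here.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// One parsed Intel HEX record. Record type 00 is data, 01 is end-of-file,
// 04 is an extended linear address (upper 16 bits of a 32-bit address).
struct HexRecord {
    uint8_t  length;
    uint16_t address;
    uint8_t  type;
    std::vector<uint8_t> data;
    bool checksumOk;
};

static uint8_t hexByte(const std::string& s, size_t pos) {
    return static_cast<uint8_t>(std::stoul(s.substr(pos, 2), nullptr, 16));
}

HexRecord parseRecord(const std::string& line) {
    HexRecord r{};
    r.length  = hexByte(line, 1);
    r.address = static_cast<uint16_t>((hexByte(line, 3) << 8) | hexByte(line, 5));
    r.type    = hexByte(line, 7);
    uint8_t sum = r.length + (r.address >> 8) + (r.address & 0xFF) + r.type;
    for (int i = 0; i < r.length; ++i) {
        uint8_t b = hexByte(line, 9 + 2 * i);
        r.data.push_back(b);
        sum += b;
    }
    sum += hexByte(line, 9 + 2 * r.length);   // include the checksum byte
    r.checksumOk = (sum == 0);                // two's-complement checksum
    return r;
}
```

Because the record boundaries are arbitrary, a real image assembler must then concatenate and re-split the data payloads to satisfy the target's alignment and programming-block-size rules, which is where most of the complexity described above lives.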
The firmware group had assumed that such arbitrary complexity could be realized only by a dedicated program, hard-wired for a specific version of the target firmware. I considered this akin to rebuilding your compiler for every change in the program it compiles. Instead, I designed a meta-language for interpreting program image files and extended my SmartSet scripting language to encompass this more complex host-target interaction. My updated script player could interpret any version of the target program according to the image file schema, which itself could be changed for other file protocols or to correct errors in my original reverse-engineered analysis of the linker’s output.
I realized, even though the firmware group had not, that they would need a specialized means of debugging this procedure. The controller self-program is not only complicated but also changes itself while it executes. I implemented a three-level logging mechanism in the script player. A low level trace of the actual messages between the host and controller, a mid-level trace of operations carried out by the script player, and a high level trace of script commands are correlated. Within minutes of the first deployment, this revealed the cause of a stall. The low level trace showed that the host stopped when it did not receive an ACK from the target. The mid-level trace showed that the script player was trying to send the piece of a code block. The high-level trace revealed that the block was the end of a portion of the target’s program related to communication. The firmware programmer recognized this as one point in the updating process where the target would jump from old to newly downloaded code. Obviously, it jumped prematurely and forgot that it needed to ACK the last message.
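The correlation idea can be sketched as follows. The key is that every low- and mid-level entry records which script command was active when it was logged; the names and structure here are illustrative, not the actual logger's API.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Three-level correlated trace: low-level wire traffic, mid-level player
// operations, and high-level script commands share a correlation key (the
// current script line) so they can be lined up after the fact.
enum TraceLevel { Wire, Player, Script };

struct TraceEntry {
    TraceLevel  level;
    int         scriptLine;   // correlation key shared by all three levels
    std::string text;
};

class Tracer {
    std::vector<TraceEntry> log_;
    int currentLine_ = 0;
public:
    void atScriptLine(int line, const std::string& cmd) {
        currentLine_ = line;                        // high-level trace
        log_.push_back({Script, line, cmd});
    }
    void playerOp(const std::string& op) {          // mid-level trace
        log_.push_back({Player, currentLine_, op});
    }
    void wireMsg(const std::string& msg) {          // low-level trace
        log_.push_back({Wire, currentLine_, msg});
    }
    // All trace entries recorded while a given script command was active.
    std::vector<TraceEntry> entriesFor(int line) const {
        std::vector<TraceEntry> out;
        for (const auto& e : log_)
            if (e.scriptLine == line) out.push_back(e);
        return out;
    }
};
```

With this structure, the stall described above falls out of a single query: the missing ACK (wire level), the block piece being sent (player level), and the script command identifying which part of the target program was involved all share one key.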
Elo actively supported Windows CE but the WinCE releases were increasingly lagging behind XP and Linux. Customers were threatening to stop using Elo for WinCE unless certain critical bugs were corrected. All improvements had been suspended in order to concentrate on fixing bugs but, as the software manager said, every bug fix seemed to take forever. Although I had no WinCE experience, he dismissed the WinCE consultant and asked me to take over, saying that I couldn’t do worse.
The consultant was building programs using the same procedure that Elo suggested to customers. This would seem logical because the WinCE customers were building custom operating systems, an activity more like Elo’s own software group than OEM customers using standard operating systems. This procedure consumed an hour. With a one-hour turn-around time, it was not an exaggeration to say that everything seemed to take forever. I was able to reduce this to 15 minutes with some XP-batch scripts but the process still required interaction and was far above the two-minute limit that I believe is needed for productive software development. However, the customer situation was critical and I fixed the most urgent bugs before tackling the turn-around problem.
Elo’s build procedure was exactly as Microsoft suggested. The procedure didn’t just compile the program under development; it also transformed it into an OS library component and added it to the BSP (Board Support Package) for the target system (a PC WinCE emulator during program development), where it could be selected for the subsequent (required) rebuild of the operating system, which was then loaded into the target. The 15-minute turn-around I had already achieved was obviously the limit of what automation alone could do to accelerate this process.
Following Microsoft’s directions, Elo built its WinCE drivers in the Platform Builder IDE but its supporting applications in a C++ compiler provided by Microsoft for embedded systems. There was no way other than through PB (Platform Builder) to get an application into the target. It made no sense to build applications in another venue when the compiler embedded in PB could be directed to build an application, driver, or anything else that could be put on the target system. In successfully testing this, I discovered that an application can be downloaded without going through any of the BSP-related steps supposedly required for drivers. Reverse-engineering PB’s build logs, I devised a procedure where the driver was not even identified as part of the OS but as a user program, treated like an application except under certain relatively rare circumstances. This yielded a driver code turn-around time of one to two minutes.
Developing this procedure consumed two dedicated weeks, during which I refused all WinCE code requests. When I resumed working on requests, my average coding cycle time had dropped from 18 minutes to four, a 4.5-fold increase in coding throughput. Within a week my up-front effort was repaid. This was done under WinCE 4.2. My accelerated turn-around time did not change under 5.0 and 6.0, while the standard time increased to 28 and 48 minutes, respectively.
Elo wanted to support three versions of WinCE (4.2, 5.0, and 6.0) and four CPUs (X86, ARMv4i, MipsII, and MipsIV). Even ignoring the terrible new-code cycle time, this was impossible. Elo had different and incompatible code bases for 4.2 and 5.0. It hadn’t yet developed for 6.0, but neither of the existing code bases would have built under 6.0. Any new feature or bug fix in one of the versions would accrue to the others only by painstaking analysis. Exacerbating this problem, as advised by Microsoft and all of the experts, a different build computer was used for each WinCE version. If a change worked in one version but not another, it was not always easy to determine whether the discrepancy was caused by differences in the development computers or in the OS versions. I merged Elo’s 4.2 and 5.0 code bases, replacing version-dependent code with agnostic code wherever possible and with build directives elsewhere, creating one code base for all three WinCE versions. Ignoring the experts’ advice, I reverse-engineered build logs to develop a means of supporting all three OS versions on one build computer.
For each release, Elo provided a separate package for each of the three WinCE versions. Although some of the files were the same (especially after I created the multi-OS-version code base), many support files and all of the compiled programs were unique. A package typically included, for each of the four CPUs, three kernel drivers and one application DLL in debug and retail compilations and three applications in retail. Simply building these 48 files using Elo’s old method would take 16 hours in 4.2, 24 hours in 5.0, and 32 hours in 6.0. Building the files for all three versions of WinCE would have taken 72 hours of constant, error-free interaction. Each package also contained 85 build control files and a dozen document files. These were combined with the compiled files in a self-extracting ZIP-exe file. All files were to be checked out from version control as part of the packaging process. This procedure obviously could not be completed in less than two weeks, during which the programmer could do nothing but babysit the process and try very hard to make no mistakes.
If nothing else, the compile process had to be automated. This could be done by a complex script with embedded knowledge of the files and directories involved in any given release. But such a script would have to be rewritten for each release. If this were also the approach used to automate other procedures, an extraordinarily complex script would have to be rewritten for each release. With many steps required in several domains, the process would be complex even if the domains were independent, but, in fact, they overlap, causing a combinatorial explosion. To reduce complexity I developed an object-oriented architecture of polymorphic scripts specialized by independent domain-specific and cross-domain declarations. For example, scripts to build the compiled files understand the general purpose of source and destination directories, source file names, the CPU being built, and the version of WinCE being built but they contain no instances of this information. It is either inherited through the environment or read from files. This not only eliminates the combinatorial effect of overlapping domains but also reduces domain-specific complexity. Most changes are effected by editing a simple declaration but even if a script in one domain must be changed, it is unlikely that this will affect the scripts in other domains.
The hundreds of files, of widely varying types, in multiple domains, could be managed by a BLOB (Binary Large Object) database with any specific release as a join of independent domains. However, if this were used for the overall release process, it would have to be used for all of the sub-processes, precluding simpler and more portable schemes for individual domains. I chose instead to organize and name directories and files by a formal scheme that is both easy for humans to grasp and easily parsed. Each distinct field in a name is essentially a data base domain key. To fully exploit this organization, scripts need to parse and synthesize these names, suggesting that they would have to be written in a language with regular expression capability, such as Perl or Python. However, some of the scripts were intended to be shared with customers who might not all have the same facilities in these languages. We could be sure of consistent capability only for XpBat, which has no general string processing. However, it does support parsing and synthesis of directory and file names.
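The idea of name fields as database keys can be illustrated with a trivial split. The example name and field order below are invented; the actual Elo scheme is not documented here. The point is that each field of a name like "EloSer_ce50_armv4i_retail" recovers a key (component, OS version, CPU, build type) with no regular expressions, the same operation XpBat performs with its built-in path-name parsing.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Split a formally structured file or directory name into its fields.
// Each field acts as a domain key, so a directory tree of such names
// behaves like a join of independent database domains.
std::vector<std::string> splitFields(const std::string& name, char sep = '_') {
    std::vector<std::string> fields;
    std::istringstream ss(name);
    std::string field;
    while (std::getline(ss, field, sep))
        fields.push_back(field);
    return fields;
}
```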
I dedicated a month to designing and implementing this system as a hierarchy of bat and simple text files. At each level, the system is functional and portable, enabling consistent sub-domain automation for Elo and its customers. After the obviously human tasks of writing documentation and deciding what should go into a release, less than five minutes spent editing a couple of text files defines a new release. A single computer, with all three versions of WinCE installed, can build a complete release for all four target CPUs in all three WinCE versions. The top-most script begins by deleting all other files in the build release system, including the scripts, and then retrieving from version control one script that retrieves the rest of the build files. The very simple top-most script is the only file that is not automatically retrieved from version control. The entire process is fully automated all the way to producing, archiving, and distributing the public self-extracting zip files. It executes in two hours.
Elo had independent projects for its WinCE serial and USB drivers. Corrections and improvements were driven by customer requests, which Elo translated into assignments for whichever driver the customer was using, even when the request was unrelated to the link type. This effectively disincentivized normalizing the two drivers, and they evolved independently. Customers could never be sure that a capability they depended on when using one link would be available for the other. Even when both drivers provided the same capability, they implemented it so differently that exact equivalence did not exist and could not be imposed without breaking other things.
I estimated that link-specific code comprised only 25% of each driver but was enmeshed in other code, preventing sharing of the remaining 75%. I performed microsurgery in both drivers to isolate link-specific code and then conceptually aligned the remainder in both drivers against each other and our public specifications, creating a matrix of existing functionality and the best existing code to implement it. Salvaging adequate existing code and replacing the rest with new code, I created a static library that implemented all existing or promised functionality that could be link-agnostic. I moved link-specific code to link-specific drivers that depended on the library for all link-agnostic functionality. My original estimate was correct. The USB code contains 26% and 23% of the lines and statements compared to the library, while the serial contains 24% and 25%. All future code changes (bug fixes and new functionality) not related to the link immediately accrue to both drivers and 75% of the code base can be ignored when deciding how to implement link-specific requests.
The central feature of my unified driver is a class stack that is link-agnostic up to the last derived class, which is serial or USB. These most-derived classes contain members that are unique to the link type and are, therefore, not naturally polymorphic. Deeper in the class stack are polymorphic methods, implemented as pure virtual, whose purpose is the same for either link but which require link-specific functions. These polymorphic functions cannot be moved up to the link-specific classes because they are invoked by link-agnostic methods.
Multiple inheritance is used at several levels of the driver class stack for two reasons. One is that the driver class encompasses several domains that share nothing with each other. The best containment architecture defines an independent class for each of these domains rather than random members of the driver class but inserting these classes into the driver class stack would breach their containment. They could exist independently of the driver class but that requires a relatively complex “friend” relationship and potentially reduced run-time efficiency. The other reason is that for some of the domains, the ability to instantiate the class independently of the driver class is useful. For example, the serial driver is derived from a serial port class that is instantiated and destroyed (sometimes repeatedly) in the system boot process. Multiple inheritance affords an opportunity to contain the sub-domain within the driver domain without preventing its independence under other circumstances. The “diamond inheritance” caveat about multiple inheritance is irrelevant here, as none of these classes have anything in common.
class EloTch {
    ...
    // Pure virtual functions are used here to allow midlevel functions to be
    // fully shared between USB and serial drivers even though they may need
    // to invoke transport-aware functions.
    virtual TCHAR* mDevRegKey( short regNum = -1 ) = 0;
    virtual int mDoSmartSet( EloTrans *outTrans, EloTrans *inTrans ) = 0;
    virtual int mGetRegCfg( int item = REGCFG_ALL ) = 0;
    ...

class EloDev : public EloTch, public TimeTouch, public CalStore
    ...

class EloDevSer : public EloDev, public SerialPort {
    ...
    // Virtual functions in EloTch. Transport-aware instances.
    TCHAR* mDevRegKey( short regNum = -1 );
    int mDoSmartSet( EloTrans *outTrans, EloTrans *inTrans );
    int mGetRegCfg( int item = REGCFG_ALL );
    ...

class EloDevUsb : public EloDev {
    ...
The main base class is EloTch. It encompasses 75% of the driver functionality, including in-band touch event processing, out-of-band communication with applications through the IoCtl interface, general system operations, etc. The link-agnostic EloDev class is derived from EloTch, TimeTouch, and CalStore. Its main purpose is to combine the independent link-agnostic domains to avoid duplication in the USB and serial derived classes. The USB and serial driver classes are derived from EloDev. In addition to link-specific functions, they both define link-specific versions of the virtual functions declared in EloTch.
Elo’s original WinCE drivers had an API for out-of-band communication with applications through the IoCtl interface. This was mainly used by Elo’s own calibration program but customers also needed this to do any run-time configuration, for example temporarily disabling the touch screen for cleaning. The APIs were not the same for the serial and USB drivers and only provided very specific functions. But even hundreds of functions couldn’t anticipate all of the reasonable permutations that customers might want. Further, there was no structural correspondence between API function parameters and configuration data and operations in the drivers. Many of the drivers’ API functions served no purpose other than to transfer operating parameters, yet this transfer required parsing and transformation. Any functional change required code changes in the producer and consumer, as expected, but also in these API functions.
I designed a new object-oriented API around two basic concepts. First, functionality is divided into generic primitive real functions and higher level functions, many inline, which invoke the primitives to perform standard operations. Many new API functions can be defined without adding real code or consuming memory. Although the primitive functions are abstruse, being based on structure and data flow rather than purpose, they are published, enabling customers to independently develop any API function they want. Secondly, the separate parameters previously used in API functions are replaced by structures that can pass (by value and reference) between applications and driver through the IoCtl interface. A relatively few generic transfer functions handle most of the primitive operations. The resulting API code in the driver is less than 10% the size of the original while supporting all of the original functionality plus infinite variations. This is implemented in the driver base class, guaranteeing that new capabilities and corrections always apply to both USB and serial drivers.
long mouseLimits[4] = { 82, 110, -82, -220 };

void setMouseLimits( bool limit )
{
    EloApi::CmdMouseLimits cmd( deviceNumber,
        gDevList->mDevs[ gSelDevIdx ].mDevName );
    for( int idx = 0 ; idx < 4 ; idx++ )
        cmd.mCmd.limit.edge[ idx ] = limit ? mouseLimits[ idx ]
                                           : -mouseLimits[ idx ];
    cmd.setRel();
}
An application can’t simply call a function in the driver but must find the driver, open a handle to it, and perform other housekeeping in addition to setting up and passing parameters. To simplify this, I created two versions of an API library, static for linking into an application and dynamic (DLL) for multiple applications to share. For example, in eloTalk dlgCalib.cpp (calibration dialog) the function setMouseLimits needs only to declare an EloApi::CmdMouseLimits as an automatic to open the IoCtl interface and begin talking to the driver.
EloApi is the API namespace, where the CmdMouseLimits class is derived from EloDevId, an API base class whose constructor and destructor take care of all of the overhead. It also provides some primitive functions to read and write parameters. Thus, the application and specific API driver functions, such as CmdMouseLimits, are responsible only for unique command characteristics.
class CmdMouseLimits : public EloDevId {
    bool setOrGet( void );
public:
    IocMouseLimits mIoc;
    GenConEloDevId( CmdMouseLimits );
    bool get( void )    { mIoc.mOp = EloLimitGet;    return setOrGet(); }
    bool setRel( void ) { mIoc.mOp = EloLimitSetRel; return setOrGet(); }
    bool setAbs( void ) { mIoc.mOp = EloLimitSetAbs; return setOrGet(); }
};

// GenConEloDevId is a generic constructor for classes derived from EloDevId.
// Wherever this macro is used, an inline constructor for the given class is
// defined (not just declared -- see {} at end). This means that the class
// can have no additional initialization beyond the basic devNum and name,
// which are passed through to the EloDevId base class constructor. The
// class T constructor exists only to provide a means to invoke the EloDevId
// constructor.
#define GenConEloDevId(T) \
    T( UCHAR devNum = 0, TCHAR *name = 0 ) : EloDevId( devNum, name ) {};

// GenSetGet generates a standard setGet prototype and inline set and get
// function definitions for the command types that fit this exact pattern.
#define GenSetGet \
    bool setGet( void ); \
    bool set( void ) { mIoc.mOp = IocSet; return setGet(); } \
    bool get( void ) { mIoc.mOp = IocGet; return setGet(); }

class EloDevId {
public:
    TCHAR mDevName[ MAX_ELO_NAME_LEN + 1 ];
    UCHAR mDevNum;
    DevLink mLink;
    EloDevId( UCHAR devNum = 0, TCHAR *name = 0 );
    void makeNameFromNum( void );
    void setDevice( UCHAR devNum, TCHAR *name );
    ELO_ERR openDevice( void );
};
The user manual that I wrote for customers provides a detailed description of the API and its use.
All of Elo’s USB touch screen controllers are HID for in-band (touch event) communication but vendor-specified for out-of-band configuration and control. It is not difficult to support this in the layered WDM driver used in all versions of Windows after NT but it appears to be impossible in WinCE. At our request, Microsoft technical support investigated this and concluded that it could not be done. However, our customers were demanding the same configuration and control capabilities in WinCE that they had for the same touchscreens under other operating systems.
Despite Microsoft’s assessment, I developed a solution. Two pieces of information are needed, the USB handle of the device and a pointer to the IssueVendorTransfer function provided by the USB bus manager. The device doesn’t lose its underlying USB identification just because the HID-USB driver takes responsibility for it but there is no means to get this information knowing only the HID handle.
If the INF for a device specifies it as both USB and HID, the USB bus manager first invokes its USB driver. If this accepts the device, the manager does not look for another driver. If the driver rejects it, the manager unloads the USB driver and then asks the OS-native USB-HID driver if it will take the device. The USB-HID driver accepts the device and associates it with any HID driver specified in the INF. The same driver can be specified as both a HID and a USB driver for the device, but it is unloaded if it rejects the USB offer and any information it might ascertain does not persist. However, it can store the required USB information in the registry, to be retrieved when the driver is reloaded as a HID driver for the device.
// -----------------------------------------------------------------
// Function: USBDeviceAttach
// Purpose:  Called by USBD.dll to determine if we will accept
//           responsibility for this device as a (non-HID) USB driver.
// Returns:  TRUE but with *fAcceptControl FALSE if we want the
//           UsbHid driver to provide the required HID functions.
// Arguments:
// - USB_HANDLE hDevice is this device's USB handle.
// - LPCUSB_FUNCS lpUsbFuncs points to the USB function table (shared
//   by all USB devices).
// ..................................................................
extern "C" BOOL USBDeviceAttach( USB_HANDLE hDevice,
    LPCUSB_FUNCS lpUsbFuncs, LPCUSB_INTERFACE lpInterface,
    LPCWSTR szUniqueDriverId, LPBOOL fAcceptControl,
    LPCUSB_DRIVER_SETTINGS lpDriverSettings, DWORD dwUnused )
{
    EloRegKey rk( REG_USB_KEY );
    RegSetValueEx( rk.key, REG_USB_DEV, NULL, REG_DWORD,
        (UCHAR*)&hDevice, sizeof( USB_HANDLE ));
    RegSetValueEx( rk.key, REG_USB_FUNCS, NULL, REG_DWORD,
        (UCHAR*)&lpUsbFuncs, sizeof( LPCUSB_FUNCS ));
    *fAcceptControl = FALSE;
    return TRUE;
}
The USB bus driver first invokes my driver as a potential USB driver for the device, calling its published USBDeviceAttach function. Typically, this function just determines whether the driver wants to own the device, but my version saves the device’s USB handle and the USB function table pointer in the registry and returns to the bus driver saying that it doesn’t want the device. When the USB-HID driver reloads the driver, now as a HID driver for the device, it doesn’t call this function again but instead calls the HIDDeviceAttach function, which copies the device handle and USB function table pointer from the registry into driver memory, where they are later used to call IssueVendorTransfer to effect USB vendor-specified communication.
For its new acoustic touch screen Elo had developed a WDM isochronous USB driver for XP. The driver worked but often crashed the OS during installation. Early release customers (and Elo’s own production team) were unhappy that they had to follow unusually strict installation and configuration rules to avoid crashing. The programmers responsible for the driver couldn’t fix this problem and I was asked to help. I found and fixed a half dozen conceptual and coding errors and apparently solved the problem. However, as subsequent OS updates added new USB power modes, crashing resumed and became increasingly frequent.
In my first round of corrections I discovered a truly awful WDM quirk. For each device the OS creates a generic PDO (Physical Device Object) linked to a device extension defined by the driver but allocated by the OS. It is difficult to manage a device without at least some rudimentary instance information in the device extension. If a device attached to USB or any other type of plug-and-play bus is detached, the OS immediately frees its device extension. If the driver calls an OS function and the device is detached before the call returns, the driver cannot access the device extension without crashing. WDM requires the driver to make many calls into the OS, most of which are expected to compile out of existence or down to a few CPU instructions that will not be interrupted by device detachment. But the driver writer is not told which calls might be vulnerable, and only by checking after every call, including the ones that may compile to nothing, can the driver be sure of avoiding a memory fault.
The PDO gives no indication that the device extension has disappeared, and the only other logical place for this information would be the device extension itself. In the first round I developed a cheap fix by intercepting the device detach event and setting a flag that the driver could test. However, as power modes proliferated, both the number of vulnerable OS functions and the number of event types causing the device extension to disappear increased. The problem became too complex for my cheap fix. To solve it more generally, I implemented a driver framework library. My framework tracks all USB events that might cause a device extension to be freed and presents to the device driver safe and relatively efficient replacements for native WDM functions.
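A minimal user-mode sketch of the flag idea behind the first-round fix (names and structure are illustrative; the real code is WDM kernel code, and the real framework tracks many more event types than a single detach):

```cpp
#include <atomic>
#include <cassert>

// Stand-in for the OS-allocated device extension.
struct DeviceExtension { int touchCount = 0; };

struct GuardedDevice {
    std::atomic<bool> detached{false};
    DeviceExtension*  ext;            // freed by the OS on detach

    explicit GuardedDevice(DeviceExtension* e) : ext(e) {}

    // Intercepted detach event: mark the extension unusable.
    void onSurpriseRemoval() { detached = true; ext = nullptr; }

    // Wrapper standing in for a vulnerable OS call: re-check the flag when
    // the call returns, before touching the device extension.
    bool safeTouch() {
        // ... imagine a real OS call here, during which detach may occur ...
        if (detached) return false;   // extension may already be freed
        ext->touchCount++;
        return true;
    }
};
```

Wrapping every vulnerable call this way is exactly what the framework library automates: the driver calls the safe replacement instead of the native WDM function and never dereferences a freed extension.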
Eventually, Microsoft addressed the problem with a more radical framework, which essentially turns the WDM inside out. Instead of the device driver calling into the OS, the framework calls functions in the device driver. This is the best solution but it requires privileged information that might change with any OS update.
The first generation of resistive touchscreens uses a single continuous electrode (deposited on glass) on each edge. By design, the electrodes are much more conductive than the resistive film used to detect finger position. The position-to-resistance mapping in one axis is influenced by the low-resistance electrodes of the other axis, producing a very pronounced pincushion response. The second generation uses interleaved segments instead of continuous electrodes, eliminating the large pincushion nonlinearity but creating a series of smaller scallops.
Both nonlinear response patterns cause targeting problems. For most screen positions, having multiple small scallops near the edges is better than one large pincushion. However, for one important class of targets, it is worse. GUI frameworks, notably Windows, tend to place very important targets in the corners. The scallop response moves corner targets’ apparent center position nearly off the screen.
Any touchscreen with continuous electrodes has pincushion nonlinearity, while any with segmented electrodes has scalloping. But the exact response depends on the size of the screen and the resistance of the sensing layer (typically indium tin oxide) and electrodes. With consistent manufacturing a given model will have a consistent response, but it will differ from all other models and be difficult to predict theoretically. Thus, although the touchscreen controller or host device driver could correct the nonlinearity for any known screen, neither can anticipate every model of screen. The controllers are very cost sensitive and can’t afford the complexity of a general linearity correction mechanism. The host device driver can afford complexity during configuration but needs to be efficient during normal operation.
I developed a general solution, which linearizes the response of any type of screen (not only resistive) by adjusting the apparent location of every point on the screen according to a compensation table. A canonical table, i.e. one with an entry for every point, would consume too much precious kernel memory. Instead, the table contains only major inflection points, between which fast linear interpolation provides the desired precision and resolution.
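The hot-path lookup described above might be sketched as follows. This is a minimal illustration under assumed conventions, not the shipped driver code: one table per axis, entries sorted by ascending raw coordinate, and integer arithmetic to keep the kernel path cheap. The names Inflection and Linearize are invented for the example.

```c
#include <stddef.h>

/* One entry per inflection point in the compensation table. */
typedef struct {
    int raw;        /* raw coordinate reported by the controller */
    int corrected;  /* true (linearized) coordinate at that point */
} Inflection;

/* Map a raw coordinate to a corrected one by linear interpolation
 * between the two surrounding inflection points. */
static int Linearize(const Inflection *tab, size_t n, int raw)
{
    if (raw <= tab[0].raw)      return tab[0].corrected;
    if (raw >= tab[n - 1].raw)  return tab[n - 1].corrected;

    size_t i = 1;
    while (tab[i].raw < raw)    /* n is small (4..~30); a scan is fine */
        i++;

    int dr = tab[i].raw - tab[i - 1].raw;
    int dc = tab[i].corrected - tab[i - 1].corrected;
    return tab[i - 1].corrected + (raw - tab[i - 1].raw) * dc / dr;
}
```

With only a handful of entries, the per-touch cost is a short scan, one multiply, and one divide, which matches the stated goal of efficiency during normal operation.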
The compensation table is unique for each model of touchscreen panel. Elo could provide this to the OEM customer, but that would impose an additional burden and opportunity for mistakes on both Elo and the customer. Instead, I developed a method for quickly creating a table for any screen. Screen alignment is already routinely done for each screen using a three-point method. I added an optional edge calibration procedure, where the user slides their finger along each edge using, but not depending on, the bezel as a guide. My (application) program continuously captures response data and then uses this to compute an optimized inflection table. The number of inflection points is not fixed. A pincushion screen might need only four points while a scalloped screen might need 30 or more.
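The original optimizer is not described in detail, but one plausible way to reduce the captured edge samples to a small inflection table is a greedy error-bounded fit: extend each straight segment until some intermediate sample would deviate from the chord by more than a tolerance, then start a new segment there. The sketch below illustrates that idea; the function names and the tolerance scheme are assumptions, not the actual algorithm.

```c
#include <stddef.h>

/* Worst deviation of samples j in (a, b) from the straight chord a->b,
 * where raw[] are raw readings and truth[] the known true positions. */
static int ChordError(const int *raw, const int *truth, size_t a, size_t b)
{
    int worst = 0;
    for (size_t j = a + 1; j < b; j++) {
        long interp = truth[a] +
            (long)(raw[j] - raw[a]) * (truth[b] - truth[a]) /
            (raw[b] - raw[a]);
        long err = truth[j] - interp;
        if (err < 0) err = -err;
        if (err > worst) worst = (int)err;
    }
    return worst;
}

/* Greedily choose sample indices to serve as inflection points so that
 * linear interpolation between them stays within tol. Writes the chosen
 * indices to out[] and returns how many were chosen. */
static size_t PickInflections(const int *raw, const int *truth,
                              size_t n, int tol, size_t *out)
{
    size_t count = 0, a = 0;
    out[count++] = 0;                 /* always keep the first sample */
    while (a < n - 1) {
        size_t b = a + 1;
        while (b + 1 < n && ChordError(raw, truth, a, b + 1) <= tol)
            b++;                      /* extend the segment while it fits */
        out[count++] = b;
        a = b;
    }
    return count;
}
```

A nearly linear (pincushion-like) region collapses to few points, while a region with many scallops naturally yields many, consistent with the four-versus-thirty range mentioned above.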
This is a system-wide mechanism, involving several application-level programs, the driver, and user documentation. Because I was responsible for all Windows CE development, it was easiest for me to release this first for Windows CE, although it is in no way specific to WinCE. When a system boots up for the first time, the driver uses a default compensation table for each touchscreen. This may be provided by Elo if the screen model is known, or the OEM customer can perform a full calibration (three-point plus edge) on their first article and embed the results in their system image. My OEM WinCE program automates this for the customer. My WinCE control panel, which Elo provides for OEM customers to include in their product, has an OEM option that either hides edge calibration or lets the end user perform it independently of the standard three-point calibration, which usually is exposed to end users.