October/November 2018 – The transitory nature of digital design

All electronic designs have a time dependency that is hard to escape, regardless of the analog or digital nature of the design. Parts are abandoned, techniques and markets change, and the costs eventually become too high even if construction is still possible. Every design has its optimum period, where it fits right into the balance of materials, techniques, cost, and customer desires, and then it fades from the market as these factors fall out of attractive alignment.

Curiously, digital designs, especially those with a software kernel, show this effect on a much faster timescale and with much higher risk factors. It is the very unfortunate merging of technical and market effects that puts strong negative pressure on designs intended for long life, especially in critical areas such as avionics, energy infrastructure, and utilities. Here, the desired time horizon for useful life is rarely less than 20 years, and based on previous generations of technology, it is implicitly expected to stretch to 40 or 50 years. Unfortunately for everyone, that kind of lifespan is simply not possible with today’s digital technology.

Most everyone remembers Moore’s Law, articulated by Gordon Moore (co-founder of Fairchild Semiconductor and Intel) in his 1965 white paper, which observed that component/transistor density was doubling on a regular cadence, later settled at every 2 years, and would continue to do so into the future. Some use the faster figure of 18 months because another Intel executive, David House, said that chip performance would double over that period due to the combined effect of speed improvements and density. Whichever criterion you use, this general observation has proven true, with sometimes severe consequences in the market.

This rush to complexity began with the revolutionary Intel 4004 microprocessor in 1971 (running at 0.74 MHz, often without external RAM). It had 2,300 transistors. For comparison, the Intel Xeon E5 processor has 2.27 billion, and my workstation has dual processors and 16 GB of RAM. Clearly, Moore’s Law works pretty much as described, but it has some serious ramifications that aren’t so obvious. For example, the 2.4 GHz 16-core processor die in my Lenovo workstation is already outdated and discontinued. Basically, I’m working on a very advanced but undeniable dinosaur.
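As a quick back-of-the-envelope check, using the transistor counts quoted above and a 2-year doubling period, the jump from the 4004’s 2,300 transistors to the Xeon E5’s 2.27 billion works out to roughly 20 doublings, or about 40 years, landing remarkably close to the actual span from 1971 to the Xeon E5 era:

```python
import math

# Rough check of the doubling claim, using the figures quoted above:
# Intel 4004 (1971): ~2,300 transistors; Intel Xeon E5: ~2.27 billion.
# The 2-year doubling period is the commonly cited Moore's Law cadence.
transistors_4004 = 2_300
transistors_xeon_e5 = 2_270_000_000
doubling_period_years = 2

doublings = math.log2(transistors_xeon_e5 / transistors_4004)
span_years = doublings * doubling_period_years

print(f"Doublings needed:  {doublings:.1f}")          # ~19.9
print(f"Implied time span: {span_years:.0f} years")   # ~40 years
print(f"Implied end year:  {1971 + span_years:.0f}")  # ~2011
```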

The rapid density/performance growth curve of digital parts tends to make higher performance progressively less expensive, but it also inevitably leaves each new part with a very short market life cycle.

Let’s face it, there isn’t much demand for 4K DRAMs or 80386 microprocessors anymore, let alone the old simple RTL, DTL, or even TTL logic circuits. Even the many parts of the 8080 family for embedded systems are largely gone. Additionally, the system-level hardware that grew up alongside these solid-state devices, such as CGA and VGA monitors, floppy disk drives, and cartridge tape backup, was once serious digital “kung fu” but is now also only a memory. In fact, these days it’s hard to find a new computer of any type with a floppy drive. Many interfaces, such as serial ports, VGA video ports, and parallel ports, are also gone. For the most part, this can be seen as progress, but as we’ll see, there’s a hidden and annoying problem.

Digital design implemented with discrete gates is not so volatile and has no intrinsic hidden code component, but this robust technique has also been seriously displaced by programmable logic in its many forms, such as the field-programmable gate array (FPGA), where a logic definition is used to configure a more generic general-purpose device. These highly proprietary devices soon fell prey to the same race for increasing complexity in a very competitive market, and many designs and architectures were abandoned in a very short time, again resulting in extremely short lifespans and a confusing product and programming landscape. Fortunately, one can still buy individual HCMOS gates and other similar parts today, but that 74HC00 quad NAND gate may now cost more than a low-end microprocessor and probably exists only as a surface-mount device.

Interestingly, there is no software corollary to Moore’s Law, and software development has not kept up particularly well with hardware. In fact, rather than building well-established, validated, and proven core languages and tools, we have instead seen a proliferation of similar but frustratingly incompatible tools with endless (usually code-incompatible) revisions, and many were quickly abandoned along the way, leaving the support systems built on them in grave danger.

In addition to programming-language churn (usually driven by the need to generate new software sales), we have seen the demise of many standard operating systems and the loss of tool platforms for software development and device programming. Even the hardware interfaces to device programmers (serial and parallel ports) are gone. As the latest unfortunate development, we have also seen the erosion of software programming education, due to the dangerously short lifespan of systems and languages and the continued need for programmers. Since all programming relies on the platform of the underlying host and target machines, which continue to evolve rapidly, deep and fundamental knowledge of them is very difficult to obtain today.

The actual skill level of this programming group is deteriorating not so much because of personal shortcomings, but rather because the industry is becoming more fractured and inconsistent as all aspects of hardware, operating systems, and programming languages become transient and continue to expire. Additionally, a well-educated programmer (a program of four or more years plus valuable work experience) will also have learned quality control, human interfaces, logic, task analysis, psychology, and a host of softer, liberal-arts topics that feed into the thoughtful design of engaging systems.

This well-integrated approach has now been largely abandoned in favor of fast coding competence in a narrow domain, without the essential philosophical structure that leads to useful, high-quality work and a strong sense of responsibility for known good results. As a result, virtually everyone’s knowledge of software and applications has become shallow and narrow. This is not a criticism, just an observation of how market forces create trends. The concept of quality took a serious hit in the process. Everyone assumes they will “fix it in the field”, a truly toxic plan for progress, and something you really don’t want to see in flight.

I think it’s hard to find anyone who can really claim to understand how Windows works, from the graphical user interface down to the hardware abstraction layer, along with the inner workings of all the dynamic link libraries in the system and the hardware drivers. Or, for that matter, the full functionality of any programming language or operating system and the differences between each version, compiler, and associated library. Once you pile applications written in different languages together with their OS support files and all their code dependencies, the certainty of known good operation under all conditions quickly vanishes, making it very difficult to validate software and establish that it is flight-worthy.

If you wanted to become a programmer today, you could teach yourself through online services or choose a two-year technical program to start, but which language is best to focus on? C, C++, Visual C++, Tiny C, C#, or maybe Go, Python, Java, JavaScript, Perl, Fortran, Ada, Lisp, Pascal, Ruby on Rails, or perhaps an older standard like BASIC or one of the many CPU-specific machine languages? Then you will need to choose a target microprocessor family, plus an integrated development environment or software tool suite, and a host machine and its operating system for your development work, and often another for your design target. Keeping this code development environment functional and intact over time is very difficult. In many circumstances, it becomes impossible.

These factors complicate the required support and error correction of fielded systems over the years, and can make it impossible to continue production as key components fall out of the supply chain and products quickly become obsolete.

It was only recently, with the advent of ARM, that the intellectual property (IP) of microprocessors truly separated from the microprocessor hardware. The concept is to provide optimized processor technology that all chipmakers can license, with compatible code structures and upward migration paths. ARM provides the tested IP; the vendors provide the differentiated silicon with the features they deem best. This helps offset the nightmare of orphaned and incompatible parts from vendors without a second source, and it finally gives some longevity and “learnability” to the key concepts of digital design with microprocessors. The arrival of a huge, widely accessible pool of common interoperable technology like ARM is one of the most important advances in digital design and of enormous benefit to all parties involved.

These are all issues affecting code creation and viable hardware. There is another aspect that can be tricky: when the code itself becomes volatile and short-lived, the process suddenly accelerates.

In the next issue of Avionics International, we will cover the transition to hardware and all its aspects. AHV
