PROJECT DEC-TET
     White Paper: October 29, 2002 (draft; subject to revision)
     Written by: The Chair

 
American Computer Scientists Association Inc.
 6 Commerce Dr
 Cranford, NJ 07016
 908-931-1390
 http://www.acsa.net


Purpose

Project Dec-Tet has been chartered to analyze the potential advantages of replacing the venerable 8-bit Byte used in modern computers with an expanded 10-bit Byte, the Dec-Tet, in future computation, and ultimately to build a model computer that reproduces the functions of today's 8-bit Byte computers while replacing every use of the 8-bit Byte, and multiples thereof, with 10-bit Bytes and multiples thereof.  The consequence, a machine whose character, graphical, integer, addressing, bus, scientific and other calculation capabilities are 25% larger at the bit level, does not seem significant at first, until one realizes that in binary arithmetic one can address or reference a number 4 TIMES AS LARGE using only 25% more bits in the fundamental architecture of the venerable Byte.

Furthermore, due to the strange relationship between binary and decimal arithmetic, a 10-bit Byte allows the values 0 to 999 to be mapped one-to-one onto 10-bit binary codes, so the classic 3-digit decimal placeholder can be represented by a single Byte of 10 bits, with only 24 of the 1024 possible values going unused, a loss of only 2.34375 percent of the representations a 10-bit Byte could carry.  The spare codes also provide an overflow region that registers can use to hold "carry" values, which typically emerge as an additive value of 1 to 9 during arithmetic; that would leave only 15 unused representations, or 1.46 percent.
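
To make the arithmetic concrete, here is a minimal sketch in C, simulating 10-bit Bytes with ordinary 16-bit integers (an assumption purely for illustration, since no current C compiler offers a native 10-bit type).  It stores one three-digit decimal group per Dec-Tet, with the decimal carry spilling into the otherwise unused code range:

    #include <stdio.h>
    #include <stdint.h>

    /* One three-digit decimal group (0-999) per simulated 10-bit Byte.
       Codes 1000-1023 remain free; a register could park a decimal
       carry there, as described above.                                 */
    typedef uint16_t dectet;            /* only the low 10 bits are used */

    static dectet add_groups(dectet a, dectet b, unsigned *carry) {
        unsigned sum = a + b + *carry;  /* ordinary binary addition      */
        *carry = sum / 1000;            /* decimal carry into next group */
        return (dectet)(sum % 1000);
    }

    int main(void) {
        unsigned carry = 0;
        dectet low = add_groups(999, 2, &carry);  /* 999 + 002 = 001, carry 1 */
        printf("low group = %03u, carry = %u\n", low, carry);
        return 0;
    }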

The Chairman of the Association began proposing 10-bit Byte computation in the summer of 1973, after lengthy consideration of the weaknesses in early 16- and 24-bit word length, 8-bit Byte computation.  This legendary value proposition to the computer industry is being re-reviewed and researched here after a very serious examination of the capabilities of today's most common microprocessors: the Pentium 3 and 4, the AMD Athlon, the Via CIII, the Clipper, the IBM 370, the Itanium, the Sparc, the SGI MIPS, the IBM PowerPC manufactured by Motorola, the IBM RS, and others.

We believe all of the products we evaluated using simulation software are suffering from limits found in the original adoption of the 8-bit Byte, a condition we call "8-Bit Cubism": the tendency of 8-bit-multiple architectures to run into the limits of 8-Bit Cubist memory and bus designs, which inadequately provide for broad enough addressability, symbolic representability and transfer speed when viewed from the "Particle Size" perspective.  The 8-bit Particle is only 25% as competent as the 10-bit Particle, which costs only 25% more bits; the 12-bit Particle, meanwhile, is 16 times as competent as the 8-bit Particle but too large for most smaller granular sub-morphemic structures within both the hardware circuitry and the software.

For example, a 12-bit Particle granularity for a single Byte would hold a character as one Particle in a set of 4096, which, while useful for one or two languages, such as Chinese, would be too large to use for languages with only 26 basic characters.  Yet a 1024-symbol character set would be rich in stylization, and could even be divided into emphasized, italic, bold and bold-italic sub-groupings within the symbol set, still using only a single 10-bit Byte to call out its characters.
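
As a sketch of that stylization idea, and only a sketch (the partition below, two style bits over eight character bits, is our own illustrative assumption, not a proposed standard), a single 10-bit code could select one of four stylings of a 256-character base set:

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative partition of a 1024-symbol set: two high bits select
       a styling sub-group, eight low bits select the base character.   */
    static const char *styles[4] = { "plain", "italic", "bold", "bold-italic" };

    int main(void) {
        uint16_t code = (2u << 8) | 'A';   /* a bold 'A' in a single dectet */
        printf("style=%s char=%c\n",
               styles[(code >> 8) & 0x3], (char)(code & 0xFF));
        return 0;
    }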

Below you will find the beginnings of a disclosure of the justification for adding 2 bits to every Byte, 2 bits per 8 to every address, 2 lines per 8 to every bus, and so on.  This will be disclosed in drafts in accordance with ACSA policy; the finished document will be presented to appropriate Standards Definition Groups and Manufacturers for further study.

Summary and Background

The original ideas around which today's computers function originated in the work of Bletchley Park in Buckinghamshire, England, in the project known as Ultra, begun in 1941.  Two segments of that project produced relevant spin-offs that made their way into American development of advanced computers in 1945-1950.  One of the key ideas, extending the six-bit communications language commonly in use to a seven- and then an eight-bit module called a Byte, originated somewhere within Ultra in 1942.

The first part of the computational history of Ultra began with Algorithmics, which originated among the Code Breakers, mathematicians who devised routine means of deriving decoded documents from encrypted originals.  This part, badly taxed by the advent of Enigma, a 3-rotor algorithmic encoder used for portable and low-key correspondence by the Nazi High Command, required automation, and Alan Turing, a brilliant mathematician and scientist, devised a means of mechanically accelerating the decoding process using "Gang Series" mechanical calculation, derived from the early work of Babbage and other mechanical-wheel calculators.  This greatly reduced the load of decrypting Enigma transmissions, helped by British Military Intelligence's securing of an Enigma encoder through subterfuge.

Knowing how Enigma worked helped the algorithmic routines of the Ultra Code Breakers adapt the work of Turing to the nightmarish process of reproducing, in reverse, how Enigma encrypted transmissions.

Then along came the Lorenz Cipher machine, a highly automated edition that worked inline with teleprinters and paper tape, and used high-speed code wheels, some 12 in number; even Alan Turing was unable to come up with a solution without an example of a Lorenz machine to work from.  The permutations were billions of times more complicated.

The second part of Ultra, Computeristics, came about when the Turing mechanical Enigma decryption tool proved unable to decode Lorenz transmissions.

As described today in the Bletchley Park Museum following its declassification by British Intelligence a few decades ago (but omitted from every documentary about Ultra, most of which were authored prior to the declassification of one of the two greatest secrets of WWII, the other being the Atom Bomb), a highly automated, paper-tape-controlled, programmable electronic computer that vastly surpassed the so-called Turing Mechanical Decryptor was created, tapping the knowledge of engineer Thomas Flowers, an employee of the British Post Office, who had experimented with such machines in the 1930s in an unsuccessful attempt to automate the British telephone system (largely due to wire transmission problems).

Tommy Flowers' unique contribution, the world's first wholly programmable electronic computer, was completed in late 1943, before the earliest working machines of Drs. Eckert and Mauchly in the USA; but since it was not declassified until the late 1970s, little was known of it outside the Intelligence and Defense communities in the USA and England, both of whom tapped its secrets, progressively feeding them to the infant computer industry after World War II.

Flowers was interviewed by the ACSA before his passing.  He was a very mild-mannered man whose sole motive for building the first computer was to save lives and Allied shipping from Nazi U-Boat wolf packs.  He knew the machine had been used as a model by researchers following World War II, even though news of its existence and decoding capabilities was kept secret until the late 1970s, since such machines were still being used for three decades after WWII to decode international encryptions, particularly those of the Russians, who had secured Lorenz ciphers from Nazi Germany during the occupation.

Meanwhile, one question Tommy answered for this writer personally has since caught our eye:

Q) Sir Flowers (-- please, call me Tom), OK, Mr. Flowers: how is it that the so-called 7-bit and 8-bit codes were chosen, rather than staying with the 64-character set, 6 bits in width?

A) Flowers: It was quite simple, really.  Originally, since the German Chancellery was transmitting in 6-level code, we thought a 6-bit data module would be right.  But we discovered that, to vary the intensity of the teletypewriters being used to encode and then transmit with Lorenz, four different style codes were added randomly by the Lorenz machine to every transmitted character.  We decided that we had enough room on the paper tape of the time to include those style codes along with each data character; our custom transmission reader and tape punch punched them, and we processed them in groupings of eight digital binary bits, which today one might call a Byte.  All of this was done at high speed, with our processor able to read and continuously reread those tapes at nearly 70 KPH.

Today, all computers build their languages, memory, data registers, addressing, and storage on the basis of this original character length: 8 bits, the so-called Byte, or what we call an OCTET, and multiples thereof.  It, with its variations and derivatives, has controlled and governed almost every other characteristic of computation ever since.  Even Moore's Law reflects this, as does the work of the great microprocessor development companies, like Intel, Motorola, IBM, AMD, TI, National Semi, Intergraph and so on.

We have wondered for some time: was 8 bits the correct length for such a data module?

The American Computer Scientists Association has commissioned the so-called Project DEC-TET for one reason alone.

It has occurred to us, after much mathematical analysis, research covering nearly a decade, and a great deal of historical analysis, that computers whose binary Byte data module system was based on a DEC-TET rather than an OC-TET, i.e. 10 bits of information rather than 8, might have unanticipated potential benefits that the compacted 8-bit Byte might be interfering with.

One might wonder: why not 12 or 16 bits?  But again, we have looked at the asymmetry of 10 bits and determined that in almost every aspect of computation, except for familiarity with 8-bit-ness, it gains a benefit that any other fundamental Byte size forfeits, the alternatives being either odd-numbered or even-order multiples of 2, 4 or 8.

IMMEDIATE BENEFITS

Long-term studies have suggested that a 10-bit Byte would provide many, many benefits.  For example (to name but a few), decimal arithmetic could be done using a more natural form of register: a single 10-bit Byte would be usable for decimal representations of up to 999, allowing values into the billions with only three Bytes:

a) If the basic Byte were 10 bits long, associated sAscii (special Ascii, a name we've derived for the first generation of 10 bit DEC-TET based Character sets) Character Sets could contain 1024 symbols assigned to locations in a single Byte, far more conducive to Rich Character Content, and inclusive of far more Typewriting and Special Symbols, without need for a Second Byte to do so.

b) If the basic Byte were 10 bits long, the Short and Very Short Integer forms used for Integer Arithmetic by most Microprocessors doing routine housekeeping would be Mod 1024 for a single byte (0-1023) of 10 bits, and Mod 1,048,576 for a double byte (0-1,048,575) of 20 bits.

c) As a result, even Decimal integers could be packed into one byte (for 0-999) or two bytes (for 0-999,999), with unused combinations available for natural error correction, without much ado.

d) Use of two- and three-Byte "program counters" could result in a short "fixed address" program of 0 to 1 Meg Words (we'll get to words in a moment) using a simple-to-manage 2-Byte Program Counter (20 bits), and a "long program" of 0 to 1 Billion Words using a not-very-hard-to-manage 3-Byte Program Counter (30 bits), long enough to address all but the most complex single programs and address spaces imaginable (the sketch following this list works out these capacities).

e) The most common word lengths, 2 Bytes (Short Word) and 4 Bytes (Long Word), would multiply these basic address spaces to 2 or 4 Million Bytes (in the case of the short "fixed address" program model) and 2 or 4 Billion Bytes (in the case of the long program model).

f) In addition, overall addressability of a single address space in a Hardware or Virtual Memory system of 4 Bytes (40 bits) would yield a 1 Tera-Word (2 or 4 Terabyte) "Virtual Addressing" model for individual programs; and in the case of large Network Address Spaces, a "Network Virtual Addressability" model of 4 Bytes Virtual plus 4 Bytes Network Space would provide up to 1 Trillion Nodes, each with 1 Tera-Word of addressability.  Since Global Addresses use two concatenated (interconnected) Network Addresses, that would expand to an unreal 8-Byte Network space (1 Trillion SQUARED nodes), each node with 1 Tera-Word of addressability.

All of these use simple 10-bit Bytes as the basis for addressing, registers, program counters and memory cells.
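
The capacities claimed in d) through f) are simple powers of two.  The following sketch (plain C arithmetic, not a proposed instruction set) works them out:

    #include <stdio.h>
    #include <stdint.h>

    /* A program counter of n 10-bit Bytes spans 2^(10*n) words. */
    static uint64_t words(unsigned n_dectets) {
        return (uint64_t)1 << (10 * n_dectets);
    }

    int main(void) {
        printf("2-Byte PC (20 bits): %20llu words\n", (unsigned long long)words(2));
        printf("3-Byte PC (30 bits): %20llu words\n", (unsigned long long)words(3));
        printf("4-Byte VA (40 bits): %20llu words\n", (unsigned long long)words(4));
        return 0;   /* 1,048,576 / ~1.07 billion / ~1.10 trillion */
    }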

SCIENTIFIC CALCULATIONS

Furthermore, Scientific Calculations would inevitably improve.

g) For example, 8-Byte and 16-Byte Scientific Arithmetic today provides 64 and 128 bits of accuracy.  In the 10-bit system, this would shift to 80 and 160 bits of accuracy, which would increase the capabilities of modern floating point units; at modern speeds, they could leverage cascades and pipelined parallelism to greatly increase the power of high-end scientific calculations.  And 32-Byte Super Scientific Calculations, formerly only 256 bits wide, would be 320 bits wide, a 64-bit gain on the same platform of methodology, extended from the lowly OCTET to the only slightly broader DEC-TET.

In some cases, things today being wastefully done in 16 bits could be done in a single Byte, if they never needed more than 1024 permutations.

And Graphical Memory Planes would be greatly enhanced, since they would no longer be limited to 256 color combinations per byte, but could point to palettes or directly represent values of up to 1024 colors.  A simple 0-7 x 0-7 x 0-7 color scheme for "simple graphics", using 9 bits plus one bit to indicate which palette (a or b), would provide an index into a simple graphics palette system storing values in two 8 x 8 x 8 color-cube palettes, for a maximum of 512 colors per palette and a choice of two palettes, roughly a thousand colors, each of which would be defined in the palette in 30- or 40-bit True Color values.  This would greatly enhance the ability to use a graphics system while achieving great speed by defining pixels with only a single 10-bit Byte.
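
A sketch of how such a pixel might be decoded follows; the exact bit layout (palette bit on top, then 3 bits each of red, green and blue) is an assumption we make for illustration only:

    #include <stdio.h>
    #include <stdint.h>

    /* Assumed layout of one 10-bit pixel:
       bit 9     -> palette select (a or b)
       bits 8..6 -> red   index 0-7
       bits 5..3 -> green index 0-7
       bits 2..0 -> blue  index 0-7                     */
    typedef struct { unsigned palette, r, g, b; } SimplePixel;

    static SimplePixel decode_pixel(uint16_t dectet) {
        SimplePixel p;
        p.palette = (dectet >> 9) & 0x1;
        p.r       = (dectet >> 6) & 0x7;
        p.g       = (dectet >> 3) & 0x7;
        p.b       =  dectet       & 0x7;
        return p;
    }

    int main(void) {
        SimplePixel p = decode_pixel(0x2FF); /* palette b, r=3, g=7, b=7 */
        printf("palette=%u r=%u g=%u b=%u\n", p.palette, p.r, p.g, p.b);
        return 0;
    }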
 
BUS SUPPORT
 
Supporting such a system would not be difficult.  Maintaining 10-, 20-, 30- and 40-bit Byte-compatible data and address busses would eliminate the need, in most cases, to maintain a 64-bit bus, thereby containing expanding architectural needs while saving connection count and clock cycles.  At today's quad-pumped speeds of 133 MHz, a 40-bit-wide 10bByte-compatible data bus running QP technology would achieve 21.28 Gigabits per second of throughput; if only transferring a stream of individual 10-bit Bytes, one per quarter cycle, 5.32 Gigabits per second, sufficient to easily support two full-duplex Gigabit data networks with plenty of throughput left as headroom.  For expanded operations, a 50-bit-wide 10bByte-compatible data bus would be preferable to a 64-bit one, even if a bit slower, because of the lower connection count, achieving nearly the same capability.

A 50-bit-wide 10bByte-compatible data or addressing bus running quad-pumped at 133 MHz would achieve 26.6 Gigabits per second of throughput, and would have addressability of 2**50 (0 to roughly 1.1 quadrillion), a range that to us sounds practically inexhaustible.
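
The throughput figures above follow from one multiplication, clock rate times four transfers per cycle times bus width, as this small C check confirms:

    #include <stdio.h>

    /* Quad-pumped bus throughput: clock x 4 transfers/cycle x width. */
    static double qp_gbps(double clock_mhz, unsigned width_bits) {
        return clock_mhz * 1e6 * 4.0 * width_bits / 1e9;
    }

    int main(void) {
        printf("40-bit bus, 133 MHz QP: %.2f Gbit/s\n", qp_gbps(133, 40));
        printf("10-bit Byte stream    : %.2f Gbit/s\n", qp_gbps(133, 10));
        printf("50-bit bus, 133 MHz QP: %.2f Gbit/s\n", qp_gbps(133, 50));
        return 0;   /* 21.28, 5.32 and 26.60 respectively */
    }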

BACKWARDS COMPATIBILITY

Each Register in such an architecture could have an associated, readable mode state used for numeric compatibility with a prior form of arithmetic calculation, via an 8bByte compatibility mode.  This would be very simple to implement; for integer-handling purposes alone it would not require much by way of additional real estate.
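
A sketch of the idea, with the mode flag and wrap-around behavior modeled in plain C (the flag name and layout are ours, purely illustrative):

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical register with a compatibility-mode flag: results wrap
       at 256 in 8bByte mode, at 1024 in native 10bByte mode.            */
    typedef struct { uint16_t value; int octet_mode; } Reg;

    static void reg_add(Reg *r, uint16_t operand) {
        uint16_t mask = r->octet_mode ? 0xFF : 0x3FF;  /* 2^8-1 vs 2^10-1 */
        r->value = (r->value + operand) & mask;
    }

    int main(void) {
        Reg native = { 250, 0 }, compat = { 250, 1 };
        reg_add(&native, 10);  /* 260: fits in a 10-bit Byte          */
        reg_add(&compat, 10);  /* wraps to 4, as an 8-bit Byte would  */
        printf("native=%u compat=%u\n", native.value, compat.value);
        return 0;
    }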

VIRTUAL ADDRESSING

The 10bByte lends itself well to Virtual Architecture.  A single 10bByte "page number" could address 1024 pages of dynamic storage, allowing a Page Frame Table (PFT) to be composed of a simplified architecture that included Dynamic and Static Address Translation.

In the 1024-page memory model, each entry in a process's PFT would consist only of the following (a sketch in C follows this list):

a) a 2- to 5-byte Virtual Address Translation address that points to the origin of the object's page in the overall Virtual Address Space of the Machine/Network Address Space (its IDENTITY address);

b) a single Byte to reflect the current status of the Page:
      1b  - in or out of Main Storage
      2b  - in static (RAM or Origin) or dynamic area (Swap File or Unknown)
      3b  - checking, known
      4b  - Fault or not Fault
      5b  - occupied or available
      6b  - locked or unlocked
      7b  - wired or unwired
      8b  - shared or exclusive
      9b  - reusable or unique
      10b - Emergency Bit (reserved for hard firewall and failure responses)

c) a 2- to 4-byte Physical Address that points to its location in physical memory (depending upon bits 1b-2b),
    its location in the Swap File, or, if a Network Address or Device, its Device Driver origin in Device
    Driver Address Space.

In future operating system implementations, each process would itself include a PFT management routine, a mirror of the global PFT management routine with limited functionality, that would be memory-resident, allowing a program to agree to share pages (or not), to request that pages be wired into RAM, and to determine available resources, so as to maintain a working set of 1024 virtual pages within a much larger address space, something typically of use to Data Base Management applications.
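
What such a process-resident mirror might expose could look like the following; every name and behavior here is hypothetical, a sketch of the intended functionality rather than a proposed interface:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical process-local mirror of the global PFT manager. */
    #define WORKING_SET 1024          /* one 10-bit Byte of page numbers */
    typedef unsigned PageNo;

    static bool page_shared[WORKING_SET];
    static bool page_wired[WORKING_SET];
    static unsigned pages_used;

    static bool pft_share(PageNo p, bool allow) {   /* agree (or not) to share */
        if (p >= WORKING_SET) return false;
        page_shared[p] = allow;
        return true;
    }

    static bool pft_wire(PageNo p) {                /* ask that p stay in RAM */
        if (p >= WORKING_SET) return false;
        page_wired[p] = true;
        return true;
    }

    static unsigned pft_available(void) {           /* free working-set frames */
        return WORKING_SET - pages_used;
    }

    int main(void) {
        pft_share(5, true);
        pft_wire(5);
        printf("frames available: %u\n", pft_available());
        return 0;
    }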

FURTHER DRAFTS WILL BE PUBLISHED UPON NOTICE.  IF YOU HAVE ANY CORRECTIONS TO SUGGEST TO THE FOREGOING, PLEASE ADVISE IN WRITING TO: ResearchPublications@acsa.net.

 

© Copyright 2001, 2002 American Computer Scientists Association Inc.  All rights reserved.
NOTICE: The idea behind the 10-bit-Byte-based computer is patent pending by the original inventor.
It may not be used without his permission.