The History of ARM: what ARM processors are, and the role of ARM CPUs and GPUs in the market

Everyone interested in mobile technology has heard of the ARM architecture. For most people it is associated with the processors in tablets and smartphones; others correct them, pointing out that ARM is not the chip itself but only its architecture. Yet hardly anyone has ever wondered where and when this technology arose.

Meanwhile, this technology is found in countless modern gadgets, and their number grows every year. Moreover, the history of the company that developed ARM processors contains one curious episode worth mentioning - for some, it may even serve as a lesson for the future.

ARM architecture for dummies

Behind the abbreviation ARM stands a highly successful British IT company, ARM Limited. The name stands for Advanced RISC Machines, and the company is one of the world's major developers and licensors of the 32-bit RISC processor architecture that powers most portable devices.

Characteristically, the company does not manufacture microprocessors itself - it only develops the technology and licenses it to other parties. In particular, the ARM microcontroller architecture is licensed by manufacturers such as:

  • Atmel.
  • Cirrus Logic.
  • Intel.
  • Apple.
  • nVidia.
  • HiSilicon.
  • Marvell.
  • Samsung.
  • Qualcomm.
  • Sony Ericsson.
  • Texas Instruments.
  • Broadcom.

Some of these names are familiar to a wide audience of digital-gadget consumers. According to the British company's own figures, more than 2.5 billion microprocessors have been produced using its technology. There are several series of mobile chips:

  • ARM7 - clock frequencies of 60-72 MHz, typical of budget mobile phones.
  • ARM9/ARM9E - noticeably higher frequencies, around 200 MHz. These microprocessors power more capable smartphones and pocket computers (PDAs).

Cortex and ARM11 are more advanced microprocessor families than the earlier ARM microcontroller architectures, with clock speeds of up to 1 GHz and advanced digital signal processing capabilities.

The popular XScale microprocessors from Marvell (until mid-2007 the project belonged to Intel) are essentially an extended version of the ARM9 architecture supplemented with the Wireless MMX instruction set, which Intel designed to support multimedia applications.

ARM is a 32-bit microprocessor architecture built around a reduced instruction set, known as RISC. By some estimates, ARM processors account for 82% of all RISC processors produced, which shows how broadly the architecture covers the 32-bit market.

Many electronic devices are built on the ARM processor architecture - not only PDAs and cell phones, but also portable game consoles, calculators, computer peripherals, network equipment and much more.

A little trip back to the past

Let's travel back a few years in an imaginary time machine and try to work out how it all began. It is safe to say that ARM holds a near-monopoly in its field, as confirmed by the fact that the vast majority of smartphones and other digital devices run on microprocessors built on this architecture.

Acorn Computers was founded in the late 1970s and began by building personal computers. This is why ARM was originally short for Acorn RISC Machine.

In 1981, the BBC Micro home computer was presented to consumers. It was a success, but its chip could not cope with graphics tasks, and the alternatives - the Motorola 68000 and National Semiconductor 32016 processors - also proved unsuitable.

The company's management then began to consider creating its own microprocessor. Its engineers were drawn to a new processor architecture devised by graduates of a local university - one built on a reduced instruction set, or RISC. After the first computer controlled by the Acorn RISC Machine processor appeared, success came quickly: in 1990 the British brand signed an agreement with Apple. This marked the start of work on a new chipset and, in turn, led to the formation of a dedicated development team called Advanced RISC Machines, or ARM.

In 1998 the company changed its name to ARM Limited. Today its specialists are not involved in chip production at all: the company's main and only business is developing the architecture and selling licenses to third parties so that they can use it. This has in no way hindered the company's growth. Some licensees acquire the rights to ready-made cores, while others equip processors with their own cores under an architectural license.

According to some reports, the company earns about $0.067 on each such chip. This figure, however, is an average and an outdated one: the number of cores in chipsets grows every year, and modern processors accordingly cost more than older designs.

Application area

It was the development of mobile devices that brought ARM Limited its huge popularity. As soon as smartphones and other portable electronics went into mass production, its energy-efficient processors found immediate use.

The turning point in ARM's development came in 2007, when its partnership with Apple was renewed. Soon afterwards the first iPhone based on an ARM processor was presented to consumers. Since then, this processor architecture has become an invariable component of almost every smartphone on the modern mobile market.

It is fair to say that almost every modern electronic device that needs a processor is in some way equipped with an ARM chip. An undeniable advantage is that the architecture supports many operating systems - Linux, Android, iOS and Windows. Among them is Windows Embedded CE 6.0, a platform designed for handheld computers, mobile phones and embedded systems.

Distinctive features of x86 and ARM

Many users who have heard a lot about ARM and x86 tend to confuse the two architectures, yet they do have distinct differences. There are two main types of architecture:

  • CISC (Complex Instruction Set Computing).
  • RISC (Reduced Instruction Set Computing).

The x86 processors (from Intel and AMD) belong to CISC, while ARM, as you may have guessed, is a RISC family. Both x86 and ARM have their fans. By emphasizing energy efficiency and a simple instruction set, ARM's engineers gave their processors a decisive advantage: the mobile market began to develop rapidly, and many smartphones came close to matching the capabilities of computers.

Intel, in turn, has always been famous for releasing high-performance, high-throughput processors for desktop PCs, laptops, servers and even supercomputers.

Each of these two families has won users over in its own way. But how do they differ? There are several distinctive features; let's examine the most important of them.

Processing power

Let's begin our comparison of the ARM and x86 architectures with this parameter. A defining feature of RISC processors is the use of as few instructions as possible, each as simple as possible - an advantage not only for hardware engineers but also for software developers.

The philosophy is simple: if an instruction is simple, the circuit implementing it does not need many transistors. As a result, die space is freed up for other components, or the chip simply becomes smaller. For this reason, ARM microprocessors began to integrate peripheral blocks such as graphics processors. A case in point is the Raspberry Pi computer, which gets by with a minimal number of components.

However, the simplicity of the instructions comes at a cost: certain tasks require additional instructions, which usually increases memory consumption and execution time.

Unlike ARM processors, the instructions of CISC chips - Intel's solutions among them - can perform complex tasks with great flexibility. In other words, RISC machines perform operations between registers, and a program usually has to load variables into registers before operating on them. CISC processors, by contrast, can perform operations in several ways:

  • between registers;
  • between a register and a memory location;
  • between two memory locations.
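The contrast between the load/store model and CISC-style memory operands can be sketched in a few lines of illustrative Python (a toy model, not real machine code; the register and memory names are invented for the example):

```python
# Toy model of the two execution styles (illustrative Python, not machine code).
memory = {"x": 5, "y": 7}
regs = {"r0": 0, "r1": 0}

# CISC-style: a single instruction may operate directly on memory,
# even between two memory locations.
def cisc_add_mem_mem(dst_addr, src_addr):
    memory[dst_addr] += memory[src_addr]      # memory-to-memory in one step

# RISC-style (load/store): memory is touched only by explicit loads and
# stores; arithmetic happens strictly between registers.
def risc_add(dst_addr, src_addr):
    regs["r0"] = memory[dst_addr]             # ldr r0, [dst]
    regs["r1"] = memory[src_addr]             # ldr r1, [src]
    regs["r0"] = regs["r0"] + regs["r1"]      # add r0, r0, r1
    memory[dst_addr] = regs["r0"]             # str r0, [dst]

cisc_add_mem_mem("x", "y")                    # one complex instruction
print(memory["x"])                            # -> 12
risc_add("x", "y")                            # four simple instructions
print(memory["x"])                            # -> 19
```

The RISC version takes more steps, but each step is trivial to implement in hardware - which is exactly the trade-off described above.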

But this is only part of the picture; let's move on to other distinguishing features.

Power consumption

Depending on the type of device, power consumption matters to different degrees. For a system connected to a permanent power source (the mains), energy consumption is effectively unconstrained. Mobile phones and other portable gadgets, however, depend entirely on careful power management.

Another difference between the ARM and x86 architectures is that the former typically consumes less than 5 watts - a figure that includes many related components: the GPU, peripherals and memory. Such low consumption is achieved through fewer transistors and relatively low clock speeds (compared with desktop processors). The trade-off is performance: complex operations take longer to complete.

Intel cores have a more complex structure and therefore consume significantly more energy. A high-performance Intel Core i7, for example, draws about 130 watts, while mobile versions draw 6-30 watts.

Software

It is quite difficult to compare the two on this parameter, since both brands are very popular in their respective circles. Devices based on ARM processors work superbly with mobile operating systems (Android and others).

Machines running Intel processors are capable of running platforms such as Windows and Linux. In addition, both microprocessor families work well with applications written in Java.

Analyzing the architectural differences, one thing can be said unequivocally: ARM processors are above all designed around the power budgets of mobile devices, while desktop solutions are chiefly about delivering high performance.

New achievements

Through its intelligent licensing policy, ARM has completely taken over the mobile market - and it does not intend to stop there. Not long ago it introduced new core designs, the Cortex-A53 and Cortex-A57, carrying one important update: support for 64-bit computing.

The A53 core is presented as a direct successor to earlier Cortex designs that combined modest performance with minimal power consumption. According to experts, the new core reduces power consumption fourfold while matching the performance of the Cortex-A9 - and this despite the A53's die area being 40% smaller than the A9's.

The A57 core, in turn, is set to replace the Cortex-A9 and Cortex-A15. ARM's engineers claim a phenomenal performance increase - three times that of the A15 core. In other words, the A57 should be about six times faster than the Cortex-A9, while its energy efficiency should be five times better than the A15's.

To summarize: the Cortex series, and the more advanced A53 in particular, differs from its predecessors in higher performance combined with equally high energy efficiency. Even the Cortex-A7 processors found in most smartphones cannot compete.

More valuable still, the Cortex-A53 architecture helps avoid problems associated with limited memory, and devices built on it drain their batteries more slowly. Thanks to the new cores, these problems should soon be a thing of the past.

Graphic solutions

In addition to developing processors, ARM works on the Mali series of graphics accelerators. The very first of them was the Mali 55, which appeared in the LG Renoir - a perfectly ordinary mobile phone. In it, the GPU was not responsible for games; it only rendered the interface, since by modern standards its capabilities were primitive.

But progress marches inexorably forward, and to keep up with the times ARM also offers more advanced models aimed at mid-range smartphones: the widespread Mali-400 MP and Mali-450 MP GPUs. Despite their modest performance and limited API support, they are still used in modern phones. A striking example is the Zopo ZP998, which pairs the eight-core MTK6592 chip with a Mali-450 MP4 graphics accelerator.

Competitiveness

At present, no one seriously challenges ARM, largely because the right decision was made at the right time. Early in its journey, however, the development team worked on processors for PCs and even attempted to compete with a giant like Intel - and even after changing direction, the company went through hard times.

When Microsoft, the world-famous software company, entered into an agreement with Intel, other manufacturers simply had no chance: the Windows operating system refused to work with ARM processors. As for Intel, watching ARM Limited's wave of success, it too tried to create a worthy competitor and brought the Intel Atom chip to the general public. But this took far longer than it had taken ARM Limited, and by the time the chip reached mass production, precious time had already been lost.

Essentially, the Intel Atom is a CISC processor with the x86 architecture. Its designers even managed to achieve lower power consumption than comparable ARM solutions. Nevertheless, most software released for mobile platforms is poorly adapted to the x86 architecture.

In the end, the company acknowledged the futility of the venture and subsequently abandoned the production of processors for mobile devices. ASUS remained the only major adopter of Intel Atom chips. Even so, these processors did not sink into oblivion: they were widely fitted into netbooks, nettops and other portable devices.

However, the situation may yet change, and the much-loved Windows operating system may come to support ARM microprocessors - steps are already being taken in this direction. Who knows; time will tell and put everything in its place.

There is one curious episode in the history of ARM's development (the one hinted at at the very beginning of this article). Apple once stood at the heart of ARM Limited, and it is quite possible that all of ARM's technology would have ended up belonging to it. Fate decreed otherwise: in 1998 Apple found itself in crisis, and its management was forced to sell its stake. Today Apple stands on a par with other manufacturers, sourcing technology from ARM Limited for its iPhones and iPads. Who could have known how things would turn out?

Modern ARM processors are capable of ever more complex operations, and in the near future the company aims to enter the server market, in which it is clearly interested. Moreover, with the era of the Internet of Things (IoT) approaching - including "smart" household appliances - we can expect even greater demand for ARM chips.

So ARM Limited's future is far from hopeless! And it is unlikely that anyone will soon manage to oust this undisputed giant of processor development for smartphones and similar electronic devices.

In conclusion

ARM processors quickly captured the mobile device market, thanks above all to low power consumption and respectable, if not record-breaking, performance. The current state of affairs at ARM can only be envied: so many manufacturers use its technologies that Advanced RISC Machines stands alongside processor giants like Intel and AMD - and this despite the company having no production facilities of its own.

For some time, MIPS and its architecture of the same name competed with the mobile brand. At present, the only serious competitor remains Intel Corporation, although its management does not believe the ARM architecture can threaten its market share.

Intel's experts also claim that ARM processors are not capable of running desktop operating systems. Yet this statement sounds somewhat illogical: owners of ultra-mobile PCs rarely use "heavyweight" software. In most cases they need Internet access, document editing, music and movie playback and other simple tasks - and ARM solutions handle such operations excellently.

The first ARM chips appeared three decades ago thanks to the efforts of the British company Acorn Computers (whose processor business grew into ARM Limited), but for a long time they remained in the shadow of their better-known counterparts, x86 processors. Everything turned upside down with the IT industry's transition to the post-PC era, when it was no longer PCs but mobile gadgets that called the tune.

It is worth starting with the fact that the x86 architecture now used by Intel and AMD is built on the CISC (Complex Instruction Set Computer) instruction set, although not in its pure form: the numerous complex instructions that were long the hallmark of CISC are first decoded into simple ones and only then executed. Clearly, this whole chain of actions consumes a good deal of energy.

ARM chips, with their RISC (Reduced Instruction Set Computer) design, serve as the energy-efficient alternative: the advantage lies in a deliberately small set of simple instructions that are processed at minimal cost. As a result, the two processor architectures, x86 and ARM, coexist peacefully (in truth, not all that peacefully) on the consumer electronics market, each with its own advantages and disadvantages.


The x86 architecture is positioned as more versatile in the tasks it can handle, including resource-intensive ones such as photo, music and video editing, as well as data encryption and compression. The ARM architecture, in turn, wins out thanks to extremely low power consumption and performance that is quite sufficient for today's most common purposes: rendering web pages and playing media content.


Business model of ARM Limited

Today ARM Limited is engaged solely in developing reference processor architectures and licensing them. Designing specific chip models and mass-producing them is the business of ARM's licensees, of whom there are a great many. Among them are companies known only in narrow circles, such as STMicroelectronics, HiSilicon and Atmel, as well as IT giants whose names are on everyone's lips - Samsung, NVIDIA and Qualcomm. The full list of licensees can be found on the corresponding page of ARM Limited's official website.


Such a large number of licensees is primarily explained by the abundance of applications for ARM processors - mobile gadgets are just the tip of the iceberg. Inexpensive and energy-efficient chips are used in embedded systems, network equipment and measuring instruments. Payment terminals, external 3G modems and sports heart-rate monitors are all based on the ARM processor architecture.


According to analysts, ARM Limited earns $0.067 in royalties from each chip produced. But this is a very rough average, since the latest multi-core processors bring in significantly more than single-core chips of outdated architectures.

Single-chip systems

From a technical point of view, calling ARM-architecture chips "processors" is not entirely correct: in addition to one or more computing cores, they include a number of related components. More appropriate in this case are the terms "single-chip system" and "system-on-a-chip" (SoC).

So, the latest single-chip systems for smartphones and tablet computers include a RAM controller, a graphics accelerator, a video decoder, an audio codec, and optional wireless communication modules. Highly specialized chips may include additional controllers for interacting with peripheral devices such as sensors.


Individual components of a single-chip system may be developed either by ARM Limited itself or by third parties. A striking example is graphics accelerators, which, besides ARM Limited (Mali graphics), are developed by Qualcomm (Adreno graphics) and NVIDIA (GeForce ULP graphics).

Nor should we forget Imagination Technologies, which does nothing but design PowerVR graphics accelerators - yet it owns almost half of the global mobile graphics market: Apple and Amazon gadgets, Samsung Galaxy Tab 2 tablets, as well as inexpensive smartphones based on MTK processors.

Legacy Chip Generations

Obsolete, but still widespread processor architectures are ARM9 and ARM11, which belong to the ARMv5 and ARMv6 families, respectively.

ARM9. ARM9 chips reach clock speeds of up to 400 MHz and are most likely what sits inside your wireless router or an old but still reliable mobile phone such as the Sony Ericsson K750i or Nokia 6300. A critically important feature of ARM9 chips is the Jazelle instruction set extension, which makes working with Java applications (Opera Mini, Jimm, Foliant, etc.) comfortable.

ARM11. ARM11 processors boast an extended instruction set compared with ARM9 and a much higher clock speed (up to 1 GHz), although their power is still not enough for modern tasks. Nevertheless, thanks to low power consumption and, no less important, low cost, ARM11 chips are still used in entry-level smartphones such as the Samsung Galaxy Pocket and Nokia 500.

Modern generations of chips

All more or less recent ARM-architecture chips belong to the ARMv7 family, whose flagship representatives have already reached eight cores and clock frequencies above 2 GHz. The processor cores developed by ARM Limited itself make up the Cortex line, and most manufacturers of single-chip systems use them without significant changes. Only Qualcomm and Apple have created their own modifications based on ARMv7: the former called its designs Scorpion and Krait, the latter Swift.


ARM Cortex-A8. Historically, the first processor core of the ARMv7 family was the Cortex-A8, which formed the basis of such well-known SoCs of its time as the Apple A4 (iPhone 4 and iPad) and the Samsung Hummingbird (Samsung Galaxy S and Galaxy Tab). It delivers roughly twice the performance of its predecessor, the ARM11. In addition, the Cortex-A8 core received a NEON coprocessor for processing high-resolution video and support for the Adobe Flash plugin.

True, all this took a toll on the Cortex-A8's power consumption, which is significantly higher than the ARM11's. Although Cortex-A8 chips are still used in budget tablets (for example, the Allwinner Boxchip A10 single-chip system), their days on the market appear to be numbered.

ARM Cortex-A9. Following the Cortex-A8, ARM Limited introduced a new generation of cores, the Cortex-A9, which is now the most common and occupies the middle price niche. Cortex-A9 cores deliver roughly three times the performance of the Cortex-A8, and two or even four of them can be combined on a single chip.

The NEON coprocessor became optional: NVIDIA eliminated it in its Tegra 2 single-chip system, deciding to free up more space for the graphics accelerator. Nothing good came of this, however, because most video player applications still relied on the time-tested NEON.


It was during the "reign" of the Cortex-A9 that the first implementations of ARM Limited's big.LITTLE concept appeared, under which a single-chip system should contain both powerful cores and weaker but energy-efficient ones. The first such implementation was the NVIDIA Tegra 3 system-on-a-chip, with four Cortex-A9 cores (up to 1.7 GHz) and a fifth energy-efficient companion core (500 MHz) for simple background tasks.
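The idea behind big.LITTLE - sending light background work to the small cores and demanding work to the big ones - can be sketched in simplified form (the threshold and the pairing of core names here are illustrative only; real schedulers, such as Linux's energy-aware scheduler, use far more detailed energy models):

```python
# Toy sketch of big.LITTLE-style task placement (illustrative, not a real
# scheduler: a single load threshold stands in for a full energy model).
BIG_CORE = "Cortex-A15"     # high performance, high power draw
LITTLE_CORE = "Cortex-A7"   # slower, but far more energy-efficient

def place_task(estimated_load: float) -> str:
    """Pick a core type for a task given its estimated CPU load (0.0-1.0)."""
    if estimated_load > 0.6:      # heavy work: pay the power cost for speed
        return BIG_CORE
    return LITTLE_CORE            # light/background work: save energy

print(place_task(0.9))  # e.g. game rendering -> Cortex-A15
print(place_task(0.1))  # e.g. background sync -> Cortex-A7
```

The payoff of the concept is exactly this asymmetry: most of the time a phone runs light tasks, so the big cores can stay powered down.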

ARM Cortex-A5 and Cortex-A7. In designing the Cortex-A5 and Cortex-A7 processor cores, ARM Limited pursued a single goal: to strike a compromise between the minimal power consumption of the ARM11 and the acceptable speed of the Cortex-A8. The ability to combine two or four cores was not forgotten either - multi-core Cortex-A5 and Cortex-A7 chips (Qualcomm MSM8625 and MTK 6589) are gradually appearing on sale.


ARM Cortex-A15. The Cortex-A15 processor cores became the logical continuation of the Cortex-A9: with them, for the first time in history, ARM-architecture chips managed to roughly match the performance of Intel Atom, which is already a great success. It is no accident that Canonical specified a dual-core ARM Cortex-A15 or a comparable Intel Atom in the system requirements for the fully multitasking version of Ubuntu Touch.


Numerous gadgets based on the NVIDIA Tegra 4, with four ARM Cortex-A15 cores and a fifth Cortex-A7 companion core, will go on sale very soon. Following NVIDIA, Samsung has picked up the big.LITTLE concept: the "heart" of the Galaxy S4 smartphone is the Exynos 5 Octa chip, with four Cortex-A15 cores and the same number of energy-efficient Cortex-A7 cores.


Future prospects

Mobile gadgets based on Cortex-A15 chips have barely reached the market, yet the main directions of the ARM architecture's further development are already known. ARM Limited has officially unveiled its next processor family, ARMv8, which will be 64-bit across the board. Its Cortex-A53 and Cortex-A57 cores open a new era of RISC processors: the first is energy-efficient, the second high-performance, and both can work with large amounts of RAM.

Consumer electronics manufacturers have not yet shown particular interest in the ARMv8 family, but new licensees planning to bring ARM chips to the server market have appeared on the horizon: AMD and Calxeda. The idea is innovative, but it has a right to life: NVIDIA Tesla graphics accelerators, which consist of a large number of simple cores, have already proven their effectiveness as server solutions in practice.

The name ARM is certainly familiar to everyone interested in mobile technology. Many take the abbreviation to mean a type of processor for smartphones and tablets, while others specify that it is not a processor at all but an architecture. And certainly few have delved into the history of ARM's origins. In this article we will try to untangle these nuances and explain why modern gadgets need ARM processors.

A brief excursion into history

Asked about "ARM", Wikipedia gives two meanings for the abbreviation: Acorn RISC Machine and Advanced RISC Machines. Let's take them in order. Acorn Computers was founded in the UK in the late 1970s and began its activities by creating personal computers; at the time, Acorn was even called the "British Apple". The decisive period came in the mid-1980s, when the company's chief engineer drew on the work of two local university graduates who had devised a new kind of processor architecture with a reduced instruction set (RISC). This is how the first computer based on the Acorn RISC Machine processor appeared. Success was not long in coming: in 1990 the British company entered into an agreement with Apple and soon began work on a new version of the chipset. As a result, the development team was spun off into a company named, like the processor, Advanced RISC Machines. Chips with the new architecture also became known as Advanced RISC Machine chips, or ARM for short.

Since 1998, Advanced RISC Machines has been known as ARM Limited. Today the company does not produce or sell processors of its own; its main and only activity is developing technologies and selling licenses to various companies to use the ARM architecture. Some manufacturers buy a license for off-the-shelf cores; others buy a so-called "architectural license" and produce processors with cores of their own design. Such companies include Apple, Samsung, Qualcomm, nVidia, HiSilicon and others. According to some reports, ARM Limited earns $0.067 on each such processor - an average and, by now, outdated figure: chipsets gain more cores every year, and new multi-core processors bring in more than obsolete designs.

Technical features of ARM chips

There are two types of modern processor architecture: CISC (Complex Instruction Set Computing) and RISC (Reduced Instruction Set Computing). The x86 processor family (Intel and AMD) belongs to the CISC architecture, while the ARM family belongs to RISC. The main formal difference between RISC and CISC - and, accordingly, between ARM and x86 - is the reduced instruction set used in RISC processors. For example, each CISC instruction is transformed into several RISC-like operations before execution. In addition, RISC processors use fewer transistors and thus consume less power.
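That decoding step can be illustrated with a toy sketch (the micro-op names and text format are invented for the example; real x86 decoders are far more involved):

```python
# Toy illustration of CISC-to-micro-op decoding (hypothetical, not a real
# x86 decoder): one complex "add register, memory" instruction is split
# into simple RISC-like micro-operations before execution.
def decode(cisc_instruction: str) -> list[str]:
    op, dst, src = cisc_instruction.replace(",", "").split()
    if op == "add" and src.startswith("["):        # memory operand present
        addr = src.strip("[]")
        return [                                   # simple micro-ops
            f"load tmp, {addr}",                   # fetch operand from memory
            f"add {dst}, {dst}, tmp",              # register-register add
        ]
    return [cisc_instruction]                      # already simple: pass through

print(decode("add eax, [counter]"))
# -> ['load tmp, counter', 'add eax, eax, tmp']
```

Every complex instruction passing through such a front end costs extra work, which is one reason the CISC approach spends more energy per instruction.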

The main priority of ARM processors is the ratio of performance to power consumption, and here ARM beats x86: the computing power you need can come either from 24 x86 cores or from hundreds of small, low-power ARM cores. Of course, even the most powerful ARM processor will never match an Intel Core i7 in raw power - but then, a Core i7 needs an active cooling system and will never fit in a phone case, and here ARM is beyond competition. On the one hand, building a supercomputer from a million ARM processors instead of a thousand x86 processors looks attractive; on the other, the two architectures cannot be compared unambiguously. In some respects the advantage lies with ARM, in others with x86.

However, calling ARM-architecture chips processors is not entirely correct. In addition to several processor cores, they include other components as well, so the most appropriate term is "single-chip system", or "system on a chip" (SoC). Modern single-chip systems for mobile devices include a RAM controller, a graphics accelerator, a video decoder, an audio codec and wireless communication modules. As mentioned earlier, individual chipset components can be developed by third-party manufacturers. The most striking example is graphics cores, which are developed, in addition to ARM Limited (Mali graphics), by Qualcomm (Adreno), NVIDIA (GeForce ULP) and Imagination Technologies (PowerVR).


In practice it looks like this: most budget Android mobile devices come with chipsets made by MediaTek, which almost invariably follows ARM Limited's blueprints, completing them with Cortex-A cores and Mali graphics (less often PowerVR).


A-brands often use Qualcomm chipsets for their flagship devices. Incidentally, the latest Qualcomm Snapdragon chips are equipped with fully custom cores: Kryo for the central processor and Adreno for the graphics accelerator.

As for Apple, the company uses its own A-series chips with PowerVR graphics accelerators for the iPhone and iPad; the chips themselves are manufactured by third-party companies. The A10 Fusion, for example, is a 64-bit quad-core processor paired with a PowerVR GT7600 graphics processor.


At the time of writing, the current processor architecture is ARMv8. It was the first to use a 64-bit instruction set and to support more than 4 GB of RAM, while remaining backward compatible with 32-bit applications. The most efficient and most powerful processor core ARM Limited has developed so far is the Cortex-A73, and most SoC manufacturers use it unchanged.


The Cortex-A73 delivers up to 30% higher performance than the Cortex-A72 and supports the full ARMv8 instruction set. Its maximum clock frequency is 2.8 GHz.

Scope of use of ARM

ARM earned its greatest fame in mobile devices. As smartphones and other portable equipment headed for mass production, energy-efficient processors came in very handy. The turning point for ARM Limited came in 2007, when the British company renewed its partnership with Apple and, some time later, the Cupertino company introduced its first iPhone with an ARM-architecture processor. Since then, an ARM-based single-chip system has become a standard component of virtually every smartphone on the market.


ARM Limited's portfolio is not limited to the Cortex-A family. Under the Cortex brand there are three series of processor cores, denoted by the letters A, R and M. The Cortex-A family, as we already know, is the most powerful; these cores are mainly used in smartphones, tablets, set-top boxes, satellite receivers, automotive systems and robotics. Cortex-R cores are optimized for high-performance real-time tasks, so such chips are found in medical equipment, autonomous safety systems and storage devices. The Cortex-M family aims at simplicity and low cost: technically these are the weakest cores with the lowest power consumption, used almost everywhere a device needs minimal power and a low price, including sensors, controllers, alarms, displays, smart watches and other equipment.

In general, most of today's devices, from small to large, that need a CPU use ARM chips. A huge plus is that the ARM architecture is supported by many operating systems: Linux-based ones (including Android and Chrome OS), iOS, and Windows (Windows Phone).

Competition in the market and prospects for the future

Admittedly, at the moment ARM has no serious competitors, largely because ARM Limited made the right choice at the right time. At the very beginning of its journey, though, the company built processors for PCs and even tried to compete with Intel. After ARM Limited changed direction, things were still not easy: the software monopoly of Microsoft, which had entered into a partnership agreement with Intel, left other manufacturers no chance, and Windows simply did not run on systems with ARM processors. Paradoxically, the situation may now change dramatically: Windows is already prepared to support processors based on this architecture.


Riding the wave of ARM's success, Intel attempted to build a competing processor and entered the market with the Intel Atom chip, though it took the company far longer than it took ARM Limited. The chipset entered production in 2011, but, as they say, the train had already left. The Intel Atom is an x86 CISC processor. The company's engineers achieved lower power consumption than ARM, but most mobile software is still poorly adapted to the x86 architecture.


Last year Intel abandoned several key initiatives in mobile: the company effectively dropped chips for mobile devices, as they had become unprofitable. The only major manufacturer that equipped its smartphones with Intel Atom chipsets was ASUS. Intel Atom did, however, see wide use in netbooks, nettops and other portable devices.

ARM Limited's position in the market is unique: at the moment almost all manufacturers use its designs, yet the company owns no factories of its own. That does not prevent it from standing alongside Intel and AMD. The history of ARM holds another curious fact: ARM technology could well have belonged to Apple, which stood at the origins of ARM Limited. Ironically, in 1998 the Cupertino company, going through a crisis, sold its stake. Now Apple, like everyone else, has to license ARM processor technology for the iPhone and iPad.

ARM processors are now capable of serious work. In the near future they will appear in servers; Facebook and PayPal data centers already run such solutions. In the era of the Internet of Things (IoT) and smart home devices, ARM chips are in even greater demand. So the most interesting part of the ARM story is still ahead.

How a processor works. Why ARM is the future

The modern consumer of electronics is very hard to surprise. We are already used to a smartphone rightfully occupying our pocket, a laptop sitting in the bag, a "smart" watch obediently counting steps on the wrist, and headphones with active noise cancellation caressing our ears.

It's a funny thing, but we are used to carrying not one but two, three or more computers at once. After all, that is what you can call any device with a processor. And it doesn't matter what the device looks like: a miniature chip is responsible for its work, one that has come a long and turbulent way.

Why did we bring up the topic of processors? Everything is simple. Over the past ten years, there has been a real revolution in the world of mobile devices.

There are only 10 years between these devices. The Nokia N95 seemed like a spaceship to us then, while today we eye ARKit with a certain distrust

But everything could have turned out differently and the battered Pentium IV would have remained the ultimate dream of an ordinary buyer.

We have tried to do without complicated technical terms, to explain how a processor works, and to find out which architecture holds the future.

1. How it all started

The first processors were completely different from what you can see when you open the lid of your PC system unit.

In the 1940s, instead of microchips, computers used electromechanical relays supplemented by vacuum tubes. The tubes acted as diodes whose state could be controlled by raising or lowering the voltage in the circuit. The structures looked like this:

A single gigantic computer needed hundreds, sometimes thousands of tubes and relays. Even so, on such a machine you could not run even a simple editor like NotePad or TextEdit from the standard Windows or macOS set: the computer simply would not have enough power.

2. The advent of transistors

The first field-effect transistor was patented back in 1928. But the world changed only with the arrival of the so-called bipolar transistors, invented in 1947.

In the late 1940s, experimental physicist Walter Brattain and theorist John Bardeen developed the first point-contact transistor. In 1950 it was superseded by the first junction transistor, and in 1954 the well-known manufacturer Texas Instruments announced a silicon transistor.

But the real revolution came in 1959, when the scientist Jean Hoerni developed the first silicon planar (flat) transistor, which became the basis for monolithic integrated circuits.

Yes, it's a bit tricky, so let's dig a little deeper and deal with the theoretical part.

3. How a transistor works

So, the task of such an electrical component as a transistor is to control the current. Simply put, this little tricky switch controls the flow of electricity.

The main advantage of a transistor over a conventional switch is that it does not need a person: it can control current on its own, and it switches much faster than you could open or close a circuit by hand.

The task of the computer is to represent the electric current in the form of numbers.

And if earlier the task of switching states was performed by clumsy, bulky and inefficient electrical relays, now the transistor has taken over this routine work.

From the beginning of the 60s, transistors began to be made from silicon, which made it possible not only to make processors more compact, but also to significantly increase their reliability.

But first, let's deal with the diode

Silicon (aka Si - “silicium” in the periodic table) belongs to the category of semiconductors, which means that, on the one hand, it transmits current better than a dielectric, on the other hand, it does it worse than metal.

Whether we like it or not, but to understand the work and the further history of the development of processors, we will have to plunge into the structure of one silicon atom. Don't be afraid, let's make it short and very clear.

The task of the transistor is to amplify a weak signal due to an additional power source.

The silicon atom has four valence electrons, thanks to which it forms bonds (covalent bonds, to be precise) with four nearby atoms, forming a crystal lattice. While most electrons stay in these bonds, a small share can move through the lattice. It is because of this partial electron mobility that silicon is classed as a semiconductor.

But such weak electron movement would not allow the transistor to be used in practice, so scientists decided to boost performance by doping: adding atoms with a characteristic arrangement of electrons to the silicon crystal lattice.

So a pentavalent impurity, phosphorus, came into use, yielding n-type material. The presence of an extra electron speeds up charge movement and increases the current flow.

For p-type doping, boron, with three valence electrons, became the catalyst. The missing electron leaves holes in the crystal lattice (which behave as positive charges), but because neighboring electrons can hop into these holes, the conductivity of silicon increases significantly.

Suppose we took a silicon wafer and doped one part of it with a p-type impurity, and the other with an n-type impurity. So we got a diode - the basic element of a transistor.

Now the electrons located in the n-part will tend to go to the holes located in the p-part. In this case, the n-side will have a slight negative charge, and the p-side will have a positive charge. The electric field formed as a result of this "gravitation" - the barrier - will prevent the further movement of electrons.

If you connect a power source to the diode so that "-" touches the p-side of the wafer and "+" touches the n-side, no current will flow: the holes are attracted to the negative contact of the power source and the electrons to the positive one, the carriers pull away from the junction, and the depletion layer widens.

But if you connect the power supply with sufficient voltage the other way around, i.e. "+" from the source to the p-side, and "-" to the n-side, electrons placed on the n-side will be repelled by the negative pole and pushed to the p-side, occupying holes in the p-region.

But now the electrons are attracted to the positive pole of the power source and they continue to move through the p-holes. This phenomenon is called forward bias of the diode.
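This forward/reverse asymmetry is captured quantitatively by the Shockley ideal diode equation, I = Is·(e^(V/Vt) − 1). A small sketch, taking a typical saturation current and the room-temperature thermal voltage as illustrative assumptions:

```python
import math

def diode_current(v, i_s=1e-12, v_t=0.02585):
    """Shockley ideal diode equation: I = Is * (exp(V/Vt) - 1).
    i_s: saturation current in amperes, v_t: thermal voltage (~26 mV
    at room temperature). Both values here are typical assumptions."""
    return i_s * (math.exp(v / v_t) - 1.0)

# Forward bias: current grows exponentially with applied voltage.
print(f"+0.6 V -> {diode_current(0.6):.3e} A")

# Reverse bias: only the tiny saturation current leaks through.
print(f"-0.6 V -> {diode_current(-0.6):.3e} A")
```

Milliamps flow in one direction, picoamps in the other: the diode really is a one-way valve, which is exactly the barrier behavior described above.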

diode + diode = transistor

By itself, the transistor can be thought of as two diodes docked to each other. In this case, the p-region (the one where the holes are located) becomes common for them and is called the “base”.

An N-P-N transistor has two n-regions with extra electrons, called the "emitter" and the "collector", and one weakly doped region with holes, the p-region, called the "base".

If you connect a power supply (let's call it V1) to the transistor's n-regions (regardless of polarity), one of the diodes will be reverse-biased and the transistor will stay in the off state.

But as soon as we connect another power source (call it V2), with its "+" contact on the central p-region (the base) and its "-" contact on the n-region (the emitter), part of the electrons will flow through the newly formed circuit (V2), while the rest are attracted by the positively biased n-region. As a result, electrons flow into the collector region, and a weak electric current is amplified.

Exhale!

4. So how does a computer actually work?

And now the most important thing.

Depending on the applied voltage, the transistor can be either open or closed. If the voltage is insufficient to overcome the potential barrier (the same one at the junction of p and n plates) - the transistor will be in the closed state - in the “off” state or, in the language of the binary system, “0”.

With enough voltage, the transistor turns on, and we get the value "on" or "1" in binary.

This state, 0 or 1, is called a "bit" in the computer industry.

That is, we get the main property of the very switch that opened the way to computers for mankind!

The first electronic digital computer, ENIAC, used about 18 thousand vacuum tubes. It occupied an area comparable to a tennis court and weighed 30 tons.

To understand how the processor works, there are two more key points to understand.

Point 1. So we have established what a bit is. But a single bit gives us only two answers about anything: either "yes" or "no". So that the computer could understand us better, eight bits (each 0 or 1) were combined into what is called a byte.

A byte can encode a number from zero to 255, i.e. 256 combinations of zeros and ones, and with those combinations you can encode anything.

Point 2. Numbers and letters alone, without any logic, would give us nothing. That is why the concept of logical operators appeared.

By connecting just two transistors in a certain way, you can implement several logical operations at once: "and", "or". The amount of voltage on each transistor and the way the two are connected determine which combinations of zeros and ones come out.
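The two arrangements can be modeled in code: transistors in series behave like AND, transistors in parallel like OR. This is a toy model with an illustrative threshold voltage, not a circuit simulation:

```python
def transistor(gate_voltage, threshold=0.7):
    """A transistor as a switch: it conducts (True) only when the
    gate voltage exceeds the threshold. Values are illustrative."""
    return gate_voltage > threshold

def and_gate(v1, v2):
    # Two transistors in series: current flows only if BOTH conduct.
    return transistor(v1) and transistor(v2)

def or_gate(v1, v2):
    # Two transistors in parallel: current flows if EITHER conducts.
    return transistor(v1) or transistor(v2)

print(and_gate(5.0, 5.0), and_gate(5.0, 0.0))  # True False
print(or_gate(0.0, 5.0), or_gate(0.0, 0.0))    # True False
```

From just these two gates (plus NOT) every other logic function, and ultimately an entire processor, can be composed.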

Through programmers' efforts, strings of zeros and ones in the binary system are translated into decimal so that we can understand what exactly the computer is "saying". And our usual input actions, such as typing letters on a keyboard, are represented as binary chains of commands.

Simply put, imagine a lookup table, say ASCII, in which every letter corresponds to a combination of 0s and 1s. You press a key, and at that moment the transistors in the processor switch, thanks to the program, so that the letter written on the key appears on the screen.
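Python's built-in `ord` performs exactly that ASCII-table lookup, and `format` turns the code into the binary chain the processor actually sees. This is a sketch of the idea, not of real keyboard handling:

```python
def key_press_to_bits(char):
    """Look up a character's ASCII code and render it as 8 bits."""
    code = ord(char)          # e.g. 'A' -> 65 in the ASCII table
    return format(code, "08b")

# The letter on the key, its number in the table, and the bit chain:
for ch in "Hi":
    print(ch, ord(ch), key_press_to_bits(ch))
# H 72 01001000
# i 105 01101001
```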

5. And the transistor race began

After the British radio engineer Geoffrey Dummer proposed in 1952 to place the simplest electronic components in a monolithic semiconductor crystal, the computer industry took a leap forward.

From the integrated circuits Dummer proposed, engineers quickly moved on to transistor-based microchips; several such chips, in turn, made up the processor itself.

Of course, such processors bore little resemblance to modern ones in size. Besides, until 1964 all processors shared one problem: each required an individual approach, its own programming language.

1964 IBM System/360. A line of software-compatible computers with a universal instruction set: code written for one processor model could run on another.

The 1970s. The first microprocessors appear: single-chip processors from Intel. Intel 4004 - 10 µm process, 2,300 transistors, 740 kHz.

1972-1974 Intel 8008 and Intel 4040. 3,500 transistors at 500 kHz for the Intel 8008 and 3,000 transistors at 740 kHz for the Intel 4040.

1974 Intel 8080. 6 µm process and 6,000 transistors. Clock frequency about 2 MHz. It was this processor that was used in the Altair-8800 computer. The Soviet copy of the Intel 8080 is the KR580VM80A processor, developed by the Kyiv Research Institute of Microdevices. 8 bits.

1976 Intel 8085. 3 µm process and 6,500 transistors. Clock frequency up to 6 MHz. 8 bits.

1976 Zilog Z80. 3 µm process and 8,500 transistors. Clock frequency up to 8 MHz. 8 bits.

1978 Intel 8086. 3 µm process and 29,000 transistors. Clock frequency 5-10 MHz. The x86 instruction set that is still in use today. 16 bits.

1979 Motorola 68000. 3.5 µm process and about 68,000 transistors. This processor was later used in the Apple Lisa computer.

1982 Intel 80186. 3 µm process and 55,000 transistors. Clock frequency up to 25 MHz in later versions. 16 bits.

1982 Intel 80286. 1.5 µm process and 134,000 transistors. Frequency up to 12.5 MHz. 16 bits.

1985 Intel 80386. 1.5 µm process and 275,000 transistors. Frequency up to 33 MHz.

It would seem that the list could be continued indefinitely, but then Intel engineers faced a serious problem.

The problem surfaced in the late 80s. Back in the mid-60s, Gordon Moore, one of Intel's future founders, had formulated the so-called "Moore's Law". It goes like this:

Every 24 months, the number of transistors on an integrated circuit chip doubles.

It is difficult to call this a law in the strict sense; it is more of an empirical observation. Comparing the pace of technological development, Moore concluded that such a trend might hold.
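The observation is easy to turn into arithmetic. Starting from the Intel 4004's roughly 2,300 transistors and doubling every two years (both figures taken from the timeline above), the projection tracks the real chips surprisingly well for two decades:

```python
def moore_projection(start_year, start_count, year, period_years=2):
    """Transistor count projected by doubling every `period_years`."""
    doublings = (year - start_year) / period_years
    return start_count * 2 ** doublings

# Start from the Intel 4004: ~2,300 transistors in 1971.
# Compare with real chips from the timeline above:
#   1974 -> ~6.5e3  (Intel 8080: 6,000)
#   1985 -> ~2.9e5  (Intel 80386: 275,000)
#   1989 -> ~1.2e6  (Intel i486: 1.2 million)
for year in (1974, 1978, 1985, 1989):
    print(year, round(moore_projection(1971, 2300, year)))
```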

But by the fourth generation, the Intel i486, engineers ran into a performance ceiling: they could no longer fit more transistors into the same area. The technology of the time simply did not allow it.

As a solution, engineers turned to a number of additional elements:

cache memory;

pipeline;

built-in coprocessor;

clock multiplier.

Part of the computational load fell on these four units. The appearance of cache memory, for example, complicated the processor's design on the one hand, but made it much more powerful on the other.

The Intel i486 processor already consisted of 1.2 million transistors, and the maximum frequency of its operation reached 50 MHz.

In 1995 AMD joined in and released the fastest i486-compatible processor of the time, the 32-bit Am5x86. It was manufactured on a 350-nanometer process, and its transistor count reached 1.6 million. The clock frequency rose to 133 MHz.

But chipmakers no longer dared to keep piling transistors onto a chip and pushing the increasingly unwieldy CISC (Complex Instruction Set Computing) architecture further. Instead, the American engineer David Patterson proposed optimizing processors by keeping only the most essential computational instructions.

So processor manufacturers switched to the RISC (Reduced Instruction Set Computing) platform. But even this was not enough.

In 1991, the 64-bit R4000 processor was released, operating at a frequency of 100 MHz. Three years later, the R8000 processor appears, and two years later, the R10000 with clock speeds up to 195 MHz. In parallel, the market for SPARC processors developed, the architecture feature of which was the absence of multiplication and division instructions.

Instead of fighting over transistor counts, chip manufacturers began to rethink their architectures. Dropping "unnecessary" instructions, executing instructions in a single cycle, general-purpose registers and pipelining made it possible to raise clock frequencies and processor power quickly without inflating the number of transistors.
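The payoff from pipelining alone can be shown with simple arithmetic: with a k-stage pipeline, n instructions take k + (n − 1) cycles instead of n·k. This is an idealized model that ignores stalls and hazards:

```python
def cycles_unpipelined(n_instructions, stages):
    # Each instruction passes through all stages before the next starts.
    return n_instructions * stages

def cycles_pipelined(n_instructions, stages):
    # After the pipeline fills, one instruction completes per cycle.
    return stages + (n_instructions - 1)

n, k = 1000, 5  # illustrative: 1000 instructions, 5 pipeline stages
plain = cycles_unpipelined(n, k)
piped = cycles_pipelined(n, k)
print(plain, piped, f"speedup ~{plain / piped:.2f}x")
```

For long instruction streams the speedup approaches the number of stages, which is why even early RISC designs leaned so heavily on pipelining.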

Quite a few new architectures appeared between 1980 and 1995.

They were based on the RISC platform and, in some cases, on a partial, combined use of the CISC platform. But the development of technology once again pushed chipmakers to keep growing their processors.

In August 1999 the AMD K7 Athlon entered the market, built on a 250 nm process with 22 million transistors. Later the bar rose to 38 million transistors, then to 250 million; process nodes kept shrinking and clock frequencies kept climbing. But, as physics tells us, everything has a limit.

7. The end of the transistor competition is near

In 2007, Gordon Moore made a very blunt statement:

Moore's Law will soon cease to apply. It is impossible to keep adding transistors indefinitely: the atomic nature of matter stands in the way.

It is plain to the naked eye that the two leading chipmakers, AMD and Intel, have clearly slowed the pace of processor development in recent years. Process precision has reached just a few nanometers, yet it is impossible to pack in still more transistors.

And while semiconductor manufacturers promise multilayer transistors, drawing a parallel with 3D NAND memory, the entrenched x86 architecture now faces a serious competitor whose roots go back 30 years.

8. What awaits "regular" processors

Moore's Law has been considered invalid since 2016: this was officially acknowledged by the largest processor manufacturer, Intel. Doubling computing power every two years is no longer possible for chipmakers.

And now processor manufacturers have a handful of options, none of them very promising.

The first option is quantum computers. There have already been attempts to build a computer that uses particles to represent information. There are several similar quantum devices in the world, but they can only cope with algorithms of low complexity.

In addition, the serial launch of such devices in the coming decades is out of the question. Expensive, inefficient and… slow!

Yes, quantum computers consume much less power than their modern counterparts, but they will also be slower until developers and component manufacturers switch to new technology.

The second option is processors with layers of transistors. Both Intel and AMD have seriously thought about this technology. Instead of one layer of transistors, they plan to use several. It seems that in the coming years, processors may well appear in which not only the number of cores and clock frequency will be important, but also the number of transistor layers.

The solution has a right to exist, and it will let the monopolists milk the consumer for another couple of decades, but in the end the technology will hit the ceiling again.

Today, acknowledging the rapid rise of the ARM architecture, Intel has quietly announced the Ice Lake family of chips. The processors will be manufactured on a 10-nanometer process and are intended for smartphones, tablets and mobile devices. But that will happen in 2019.

9. ARM is the future

So, the x86 architecture appeared in 1978 and belongs to the CISC platform type, i.e. it implies having instructions for all occasions. Versatility is x86's main strong point.

But that same versatility played a cruel joke on these processors. x86 has several key disadvantages:

complex and frankly tangled instructions;

high energy consumption and heat release.

High performance came at the price of energy efficiency. Moreover, only two companies currently work on the x86 architecture, and they can safely be called monopolists: Intel and AMD. Only they can make x86 processors, which means only they steer the technology's development.

ARM (Acorn RISC Machine), by contrast, is developed with the involvement of several companies. Back in 1985, its developers chose the RISC platform as the basis for the architecture's further development.

Unlike CISC, RISC involves designing a processor with the minimum required number of instructions, but maximum optimization. RISC processors are much smaller than CISC, more power efficient and simpler.

Moreover, ARM was originally created solely as a competitor to x86. The developers set the task to build an architecture that is more efficient than x86.

Ever since the 40s, engineers have understood that one of the priority tasks is to work on reducing the size of computers, and, first of all, the processors themselves. But almost 80 years ago, hardly anyone could have imagined that a full-fledged computer would be smaller than a matchbox.

For skeptical users who trawl through the top lines of Geekbench, I just want to remind you: in mobile technology, size is what matters first of all.

Put a desktop tower with a powerful 18-core processor that "rips the ARM architecture to shreds" on the table, and then put your iPhone next to it. Feel the difference?

11. Instead of a conclusion

It is impossible to cover 80 years of computer history in one article. But after reading it, you should understand how the main element of any computer, the processor, works, and what to expect from the market in the coming years.

Of course, Intel and AMD will work on further increasing the number of transistors on a single chip and promoting the idea of multilayer elements.

But do you, as a customer, need such power?

I don't think you're dissatisfied with the performance of an iPad Pro or the flagship iPhone X. I don't think you're dissatisfied with the performance of your multicooker in your kitchen or the picture quality on a 65-inch 4K TV. But all these devices use processors on the ARM architecture.

Microsoft has already officially announced that it is looking at ARM with interest. The company added support for this architecture back in Windows 8.1 and is now actively working in tandem with the leading ARM chipmaker Qualcomm.

Google is also eyeing ARM: the Chrome OS operating system supports this architecture, and several Linux distributions compatible with it have appeared. And this is just the beginning.

And just try for a moment to imagine how pleasant it will be to combine an energy-efficient ARM processor with a graphene battery. It is this architecture that will make it possible to obtain mobile ergonomic gadgets that can dictate the future.

Hi all. In today's article we will get acquainted with the ARM architecture, and in subsequent posts we will work with these microcontrollers, boosting the performance and functionality of our projects. We have already covered AVR micros and used them in various devices; for example, the most recent one, a USB device, will serve as an intermediate link in future projects.

What is ARM? Let's start with history. The abbreviation stands for Advanced RISC Machine, an advanced RISC machine, or originally Acorn RISC Machine, where Acorn was the company name and "Advanced" came later, with the spin-off of the processor business. Acorn itself was the renamed CPU company, which released its first computer, the Acorn System 1, in 1979, the same year it took the Acorn name. Formally, ARM Holdings was established in 1990, at the moment an agreement was signed between three companies: Apple Computer, Acorn Computers and VLSI Technology. You can read more about the history in the article at the following link: https://xakep.ru/2014/10/04/arm-history/.

ARM is a family of 32-bit and 64-bit microprocessor cores widely used in consumer electronics. The architecture emerged after Acorn engineers studied the documentation of the Berkeley RISC project. The official Acorn RISC Machine project began in October 1983, and the first ARM1 processor was produced on April 26, 1985. A year later came the production ARM2, followed by the ARM3 family, ARM6 in 1992, and so on. The company itself does not fabricate chips; its main business is selling licenses. For example, holders of an "architectural license" have the right to develop their own microprocessor cores implementing ARM instructions and to use ARM patents. In 2016, the Japanese company SoftBank (the third-largest operator in the Land of the Rising Sun) acquired the British company ARM for $32 billion.

If we compare ARM with x86, the latter is positioned as a processor for resource-intensive tasks and is a CISC (Complex Instruction Set Computing) design, i.e. it implements instructions for all occasions, in contrast to RISC, which provides a minimal set of commands necessary for operation. The downsides of x86 are its power consumption, and hence considerable heat output, and the complexity of its instructions; ARM offers minimal power consumption and low price, but lower performance than x86. Recently the line between the two architectures has been blurring, with ARM processors becoming ever faster and more capable. Together, these two architectures account for the bulk of market sales.

The architecture has evolved over time and since ARMv7 3 profiles have been defined:
- 'A'(application) - for devices that require high performance (smartphones, tablets)
- 'R' (real time) - for real-time applications,
- 'M' (microcontroller) - for microcontrollers and low-cost embedded devices.
Strictly speaking, the M profile (the ARMv6-M and ARMv7-M versions, Cortex-M cores) does not refer to "real" ARM processors. First, its system architecture differs radically from all other ARM designs, so at the system level it is incompatible both with earlier processors and with the other profiles of the 7th version of the architecture. Second, these chips implement only the Thumb (ARMv6-M: Cortex-M0 and -M1 cores) or Thumb-2 (ARMv7-M: all other Cortex-M cores) instruction set; the ARM instruction set is not supported. The series is intended for use as small and medium-performance microcontrollers. Thanks to their low cost and power consumption, they successfully compete with much weaker 8- and 16-bit microcontrollers while offering far greater computing capability. Note that assigning the Cortex-M0 and -M1 cores to the 6th version of the architecture is purely formal. All the relevant documentation can be downloaded from the official site: https://developer.arm.com/
Looking ahead: we will work with the ARMv7E-M architecture and a Cortex-M core, and study all the subtleties as we go.
Below is a table of the Cortex-M family.

Finally, consider the Thumb instruction set. This is a mode of ARM processors (starting with the ARM7TDMI) in which an abbreviated instruction set is used. It consists of 36 instructions taken from the standard 32-bit ARM instruction set and re-encoded as 16-bit opcodes, i.e. the processor executes an alternative set of 16-bit instructions. Because Thumb instructions are half the length of standard 32-bit ones, they significantly reduce the required program memory (by about 30%) and allow cheaper 16-bit memory to be used. When executed, these instructions are decoded by the processor into equivalent ARM operations performed in the same number of cycles. Shorter opcodes generally give greater code density, although some operations require extra instructions. In situations where the memory port or bus is limited to 16 bits, the shorter Thumb opcodes run much faster than regular 32-bit ARM opcodes, since less code has to be fetched over the limited memory bandwidth. Thumb-2, a mixture of ARM and Thumb, is a technology introduced with the ARM1156 core, announced in 2003. It extends the limited 16-bit Thumb instruction set with additional 32-bit instructions. The goal of Thumb-2 is to combine Thumb-like code density with the performance of the 32-bit ARM instruction set, and in ARMv7 this goal can be considered achieved. Thumb-2 extends both the ARM and Thumb instruction sets with still more instructions, and the Unified Assembly Language (UAL) lets the same source code be assembled for either ARM or Thumb. All ARMv7 chips support the Thumb-2 instruction set, while some, like the Cortex-M3, support only Thumb-2; the remaining Cortex and ARM11 chips support both Thumb-2 and the ARM instruction set.
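The ~30% saving quoted above is easy to sanity-check: Thumb opcodes are 2 bytes instead of 4, but some operations need an extra instruction, so the saving is less than the naive 50%. The instruction-count overhead below is an illustrative assumption, not a measured figure:

```python
def code_size(n_instructions, bytes_per_instruction):
    """Total code size in bytes for a routine."""
    return n_instructions * bytes_per_instruction

# A routine of 1,000 ARM (32-bit, 4-byte) instructions...
arm_size = code_size(1000, 4)        # 4000 bytes

# ...recompiled for Thumb: 2-byte opcodes, but assume ~40% more
# instructions because some complex operations take two opcodes.
thumb_size = code_size(1400, 2)      # 2800 bytes

saving = 1 - thumb_size / arm_size
print(f"Thumb saves ~{saving:.0%}")  # Thumb saves ~30%
```

Under these assumptions the halved opcode width, even after the instruction-count penalty, lands right around the 30% memory saving the documentation cites.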

The ARM architecture has many versions. Today (2017) the latest is ARMv8-A in the Cortex-A50 family; incidentally, in spring 2017 ARM introduced two new processor cores, the Cortex-A75 and Cortex-A55. We will not consider designs by third-party companies holding an ARM architectural license, which lets them implement proprietary instructions. We will get acquainted with the ARMv7E-M architecture on a Cortex-M4 core, working with the STM32F303VCT6 microcontroller on the STM32F3 Discovery development board. I wrote above about switching to ARM for performance and functionality, but we will also broaden our horizons a bit, study new technology and learn to integrate it into projects alongside other technologies. In the next post we will get acquainted with the STM32F303VCT6 microcontroller, look at its architecture, and learn how to work with it. That is all for today.
