Ping-pong scheme uses semaphores to pass dual-port memory privileges
Dual-port memory can be used efficiently in embedded systems, provided it is used correctly. Dual-port random-access memory (RAM) is outfitted with internal semaphores to help the processors read from and write to the dual-port RAM. However, you still need a scheme that ensures the safe passage of information from one processor to the other. The ping-pong scheme uses three of the component's typical eight semaphores, and the two processors can have differing speed and power. The scheme permits a privilege to pass freely back and forth between the processors.
Numerous schemes for passing information between processors through dual-port memory are possible. But a ping-pong scheme uses only three semaphores, keeps no state data within the dual-port memory, and doesn't depend on timing.
During the last few years, the price of dual-port memory has fallen to a level that makes it practical for embedded systems. Using a dual-port RAM sounds appealing, but you have to know how to use this RAM correctly. The two processors, one on each side of the dual-port RAM, cannot simply read and write the dual-port RAM at any moment. To help designers manage this issue, most dual-port RAMs have internal semaphores. Semaphores are flags that only one processor at a time may hold.
These semaphores are only basic building blocks. You must also implement a scheme that allows information to pass safely from one processor to the other. Several schemes are possible. In one scheme, the dual-port RAM itself holds a state variable for use in the processors' arbitration. Another scheme guarantees that the other processor reads and modifies the information within a restricted time limit.
But a third possible scheme uses no state data within the dual-port RAM and does not depend on timing. This scheme uses three of the component's typical eight semaphores, and the two processors can differ in processing speed and power. The processor on each side steps through a simple state machine with only one possible next state. This "ping-pong" approach lets a privilege pass back and forth between the processors.
Ping-pong scheme passes privileges
The scheme revolves around a privilege that passes between the processors. A processor can hold the privilege for as long as it wants. While a processor holds the privilege, it is free to do anything it wants with the buffers. The scheme uses three semaphores (A, B, and C) to move the privilege. Figure 2 shows a complete privilege-passing sequence.
Each processor holds the privilege three times in a full cycle, allowing as many as six synchronized transfers. A processor that owns two semaphores has the privilege. The processor that wants to pass the privilege to the other processor does so by releasing the oldest semaphore it owns.
You can use any register- or protocol-based strategy to interpret the buffers, and you can choose whether or not to overwrite data. Also, the scheme protects any number of buffers. The scheme is fast, because a processor has only to free one semaphore and poll for the next between communications. The communication does not deadlock, because the scheme acquires and releases the semaphores in the proper order.
This simple solution was not easy to derive. You cannot use a single semaphore alone, because it merely protects the data and gives no synchronization or direction indication. In a single-semaphore scheme, any processor, including the processor that has just released the semaphore, could acquire the released semaphore again. Similarly, you cannot implement the communication with only two semaphores. But in a scheme with three semaphores, a processor never acquires and releases the same semaphore consecutively, never acquires two semaphores in a row, and never releases two semaphores in a row. Processor 1's sequence is "free-get-free-get-free-get" on semaphores A, B, and C.
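To make the sequence concrete, here is one possible full cycle consistent with the description above (the starting ownership is an assumption for illustration): processor 1 starts out owning semaphores A and B, and therefore the privilege, and processor 2 starts out owning C.

1. Processor 1 frees A (its oldest); processor 2 gets A and, owning C and A, has the privilege.
2. Processor 2 frees C (its oldest); processor 1 gets C and, owning B and C, has the privilege.
3. Processor 1 frees B; processor 2 gets B and, owning A and B, has the privilege.
4. Processor 2 frees A; processor 1 gets A and, owning C and A, has the privilege.
5. Processor 1 frees C; processor 2 gets C and, owning B and C, has the privilege.
6. Processor 2 frees B; processor 1 gets B and is back in the starting state, owning A and B.

In this cycle each processor's actions strictly alternate between freeing and getting, each processor holds the privilege three times, and the privilege changes hands six times before the pattern repeats.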
You must ensure power-up consistency between the two processors. You can accomplish this with a simultaneous acquisition of the required semaphores plus a time-out before the sequence begins.
The following analogy helps to explain the scheme. Consider two people who want to share an ice-cream cone. They can use three balls to help them share it: one red, one blue, and one green, all initially lying on a table. They must agree on the ball color sequence (r-b-g-r-b-g, and so on) and on who gets the first lick. Whoever licks the ice-cream cone must hold two balls.
A simulation case
The following Occam program simulates this ping-pong scheme. (Occam is a registered trademark of SGS-Thomson Microelectronics, formerly Inmos.) Occam-2 is a language that supports parallel processes, making the real-time scheduler invisible and unreachable. The strongly typed language has a set of rules that, along with the absence of pointers and dynamic memory handling, makes programming virtually self-evident. The language is small and easy to learn. Like the real-time parts of Ada, Occam-2 builds on CSP notation (Communicating Sequential Processes, the formal theory developed by C.A.R. Hoare). The following is the main part of the program:
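The original listing itself is not shown here. As an illustration only, a top level along these lines, with invented names such as CMD, REPLY, one.second, and delay parameters, could look like this (the fold lines stand for declarations sketched later in the article):

VAL INT NoOfProcessors IS 2:
VAL INT NoOfSema       IS 3:
VAL INT one.second     IS 15625:   -- low-priority timer ticks are 64 usec
VAL INT tenth.second   IS 1563:    -- roughly 0.1 sec
...  PROTOCOL CMD and PROTOCOL REPLY (sketched later)
...  PROC DualPortRam and PROC Processor (sketched later)
INT buffer:                        -- simulates the dual-port RAM data space
#PRAGMA SHARED buffer              -- deliberately defeats the usage checker
[NoOfProcessors][NoOfSema]CHAN OF CMD   command:
[NoOfProcessors][NoOfSema]CHAN OF REPLY reply:
PAR
  DualPortRam (command, reply)
  Processor ([0, 1], command[0], reply[0], buffer, one.second)    -- starts with the privilege
  Processor ([2],    command[1], reply[1], buffer, tenth.second)  -- must acquire it first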
The code listing is folded. All the boldface text lines beginning with three dots are folds. Each fold crease repeats as a heading at the place where the contents of that fold appear. Occam uses strict indenting of two spaces to define blocks of code.
Just one INT, whose ownership privilege passes between the two processors, simulates the dual-port RAM's data space. Occam supports channels (using the CHAN construct) and protocols (using the PROTOCOL construct). All communication between concurrent processes (using the PAR construct) occurs over synchronous, unbuffered, unidirectional channels. Occam has no semaphores, because shared resources are encapsulated in server processes. Here, however, the intention is to simulate a shared buffer and semaphores. Thus, to create the shared buffer, you must break an Occam rule with the #PRAGMA SHARED compiler directive.
Figure 3 shows the command-flow diagram of the main application. Each processor communicates with the dual-port RAM through three command channels (one per semaphore), and the RAM replies over three reply channels (one per semaphore). This scheme corresponds to having a separate address for each query to a real dual-port RAM. The following code defines the Occam protocols.
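The protocol listing does not appear here. A minimal sketch, using the invented names from the top-level sketch above, might be:

PROTOCOL CMD   IS INT:          -- a command code from a processor to the RAM
PROTOCOL REPLY IS BOOL:         -- TRUE = semaphore granted, FALSE = denied

VAL INT get.sema  IS 0:         -- ask for (lock) a semaphore
VAL INT free.sema IS 1:         -- give a semaphore back

A BOOL reply is enough for this sketch, because the DualPortRam only ever answers "granted" or "denied".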
Both are simple protocols, but Occam also supports variant protocols, which are user-defined, tagged protocol formats. The following code shows the timing aspect of Occam.
TIMER is a primitive data type, and the fundamental unit is a tick (1 µsec for high-priority processes and 64 µsec for low-priority processes). This procedure provides the optional time delay.
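The timing listing is likewise omitted. A minimal sketch of such a delay procedure, built on Occam's TIMER, could be:

PROC pause (VAL INT ticks)
  TIMER clock:
  INT now:
  SEQ
    clock ? now                    -- read the current time
    clock ? AFTER now PLUS ticks   -- block until now + ticks has passed
:

Called as pause (one.second), for example, it blocks the calling low-priority process for about a second.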
The most interesting aspect of this code is the CHAN parameters. Both command and reply are 2-D arrays of channels. The dimensions represent the two processors and the three semaphores. The Occam compiler guarantees that there is only one sender and one receiver per channel.
The following code handles the processor queries; a sketch appears after the next paragraph. Observe that the question mark (?) passively waits for data on a channel, and the exclamation mark (!) sends data over a channel when the receiver is ready for it.
The query-handling code implements a normal server, one that sits idly waiting for a command coming from either processor (PRI ALT p = 0 FOR NoOfProcessors) and concerning any semaphore (PRI ALT s = 0 FOR NoOfSema). The code thus waits on six channels (2 x 3). The input command[NextALT[...]][s] ? cmd sets up these six events. The code processes the first command it receives. If the semaphore is currently taken, the DualPortRam answers with a refusal. If the semaphore is free, the DualPortRam grants the semaphore and locks it. The DualPortRam does not know which processor is using a semaphore; it knows only the binary state. Note that dots in Occam names mean nothing more than an underscore does in C names.
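The server listing is not reproduced either. A simplified sketch, reusing the names NoOfProcessors, NoOfSema, command, reply, and cmd from the text but leaving out the NextALT fairness indexing (and not acknowledging free.sema commands), might look like this:

PROC DualPortRam ([NoOfProcessors][NoOfSema]CHAN OF CMD   command,
                  [NoOfProcessors][NoOfSema]CHAN OF REPLY reply)
  [NoOfSema]BOOL taken:
  SEQ
    SEQ s = 0 FOR NoOfSema
      taken[s] := FALSE                    -- every semaphore starts out free
    WHILE TRUE
      INT cmd:
      PRI ALT p = 0 FOR NoOfProcessors     -- wait for a query from either processor
        PRI ALT s = 0 FOR NoOfSema         -- over any of the six channels (2 x 3)
          command[p][s] ? cmd
            IF
              cmd = get.sema
                IF
                  taken[s]
                    reply[p][s] ! FALSE    -- refuse: the semaphore is in use
                  TRUE
                    SEQ
                      taken[s] := TRUE     -- grant the semaphore and lock it
                      reply[p][s] ! TRUE
              cmd = free.sema
                taken[s] := FALSE          -- release; no reply in this sketch
              TRUE
                SKIP                       -- ignore unknown commands
:

In the article's version, the NextALT indexing reorders which processor the PRI ALT considers first, giving the fairness discussed next.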
Whenever one processor has been served, the other processor is put first in the ALT's queue of passive waiters. Without this explicit control of the ALT fairness, you would have to introduce a delay in the processors so that they could not immediately ask again for a semaphore; such an immediate repeat query could cause the releasing processor's own query never to be served. With the fair scheduling, the processors do not require this delay. No good system design should rely on inserted delays.
The following code repeatedly asks for the second semaphore. If the DualPortRam denies the request, the code moves into a waiting mode. This waiting is not strictly necessary, but you can look upon it as time during which the processor can do things other than ping-pong the information back and forth.
As you can see in this code, one processor has time to attend to the dual-port RAM once/sec; the other, 10 times/sec. This means that the faster processor performs nine queries that end in denial for every success. Running at full speed with no delay causes the buffer value to increment to 10,000 in 3 sec, after the initial 1-sec delay.
Whenever a processor owns two semaphores, it can do anything it needs to with the buffer. A system could manage several buffers with this three-semaphore scheme and could also assign directions to the semaphores. With three buffers, there could be one for each direction (such as command/reply) and one for bidirectional information (register-based). Our test program checks whether the other processor has incremented the buffer's value by 1, then increments the value itself and sends it on.
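Again, the original Processor listing is not shown. A simplified sketch that follows the behavior described above, reusing pause, get.sema, free.sema, and NoOfSema from the earlier sketches, could be:

PROC Processor (VAL []INT initial,              -- semaphores claimed at power-up
                []CHAN OF CMD command, []CHAN OF REPLY reply,
                INT buffer, VAL INT delay.ticks)
  INT oldest, next:
  BOOL have.privilege, granted:
  SEQ
    SEQ i = 0 FOR (SIZE initial)                -- power-up: claim this side's semaphores
      SEQ
        command[initial[i]] ! get.sema
        reply[initial[i]] ? granted             -- assumed to be granted at power-up
    oldest := initial[0]                        -- the semaphore to free first
    next   := (initial[(SIZE initial) - 1] + 1) \ NoOfSema
    have.privilege := ((SIZE initial) = 2)      -- owning two semaphores = the privilege
    WHILE TRUE
      SEQ
        pause (delay.ticks)                     -- simulate other work between visits
        WHILE NOT have.privilege                -- poll for the next semaphore
          SEQ
            command[next] ! get.sema
            reply[next] ? granted
            IF
              granted
                SEQ
                  next := (next + 1) \ NoOfSema
                  have.privilege := TRUE
              TRUE
                pause (delay.ticks)             -- denied: wait before asking again
        buffer := buffer + 1                    -- use the privilege on the shared value
        command[oldest] ! free.sema             -- free the oldest owned semaphore:
        oldest := (oldest + 1) \ NoOfSema       -- this passes the privilege away
        have.privilege := FALSE
:

The article's test program additionally checks that the other processor incremented the value by exactly one before incrementing it again; that check is omitted in this sketch.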
Figure 4, which shows the Processor and DualPortRam code side by side, illustrates a number of the communication elements.
All the code is complete, fully tested, and working. (Code that reports to the screen has been stripped out.) The Occam code was tested on an SGS-Thomson transputer PC plug-in board. Occam is now also available to nontransputer users. A system called SPOC (Southampton Portable Occam Compiler) generates ANSI C. Also, a compiler named KROC (Kent Retargetable Occam Compiler) now produces code that runs on a Digital Equipment Alpha running OSF 3.0 and on a SPARC running SunOS/Solaris. You can even run Occam on PCs under a DOS extender. For additional information, try the following Web sites:
Oyvind Teig is a senior development engineer at Autronica AS (Trondheim, Norway). He works on the design and programming of real-time systems and holds an MSc degree from the Norwegian Institute of Technology.