On Mon, 10 Sep 2007 23:16:48 -0700, Peter Alfke <alfke@sbcglobal.net> wrote:
>On Sep 10, 10:55 pm, richard.melik...@gmail.com wrote:
>> Hello,
>>
>> Most books on digital design discuss Gray codes. However, most of the focus is on generating these codes, rather than detailing their uses.
>>
>> I read the Wikipedia article: http://en.wikipedia.org/wiki/Gray_code, but it doesn't provide enough in-depth information on the uses of Gray code in hardware.
>>
>Binary-coded counter sequences often change multiple bits on one count transition. That can (will) lead to decoding glitches, especially when counter values are compared for identity.
>Gray-coded count sequences always change one, and only one, bit on each transition. Comparing two such counters for identity will thus never generate a decoding glitch.
>That is the reason for Gray code counters in FIFO addressing, where FULL and EMPTY is established by comparing two asynchronously clocked counters for identity. Gray code addressing is a must for asynchronous FIFOs.
>
>The "one bit change per transition" advantage occurs only in counters, or under other very restrictive circumstances. I do not see an advantage in general state machines, where sequences are not as predictable.
>
>Gray-coded counters use on average half the number of transitions, compared to binary counters. That's a dynamic power advantage.
>
>And yes, Gray is spelled with an a, since it is named for its inventor.
>
>Anybody else have any comments?

One typical use of this code is rotary encoders. The discs inside them are Gray-encoded, so they are more robust, and the encoder output is less glitchy.

Zara

Article: 124026
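(To make Peter's "one, and only one, bit per transition" property concrete: the standard binary-to-Gray conversion is g = b ^ (b >> 1), and consecutive Gray codes always differ in exactly one bit, including at the wrap-around. A minimal C sketch verifying this, added here for illustration -- it is not code from the thread:)

    #include <assert.h>
    #include <stdint.h>

    /* Standard binary-to-Gray conversion: g = b ^ (b >> 1). */
    static uint32_t bin2gray(uint32_t b)
    {
        return b ^ (b >> 1);
    }

    int main(void)
    {
        /* Walk an 8-bit counter and verify that each Gray code differs
           from its successor in exactly one bit position, including the
           wrap from 255 back to 0 -- the property that lets two
           asynchronously sampled FIFO pointers be compared without
           decoding glitches. */
        for (uint32_t i = 0; i < 256; i++) {
            uint32_t diff = bin2gray(i) ^ bin2gray((i + 1) & 0xffu);
            assert(diff != 0 && (diff & (diff - 1)) == 0); /* one bit set */
        }
        return 0;
    }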
On 10 Sep., 18:13, Weng Tianxiang <wtx...@gmail.com> wrote:
> Hi,
> I want to know what the carry chain structure in FPGAs is called in ICs.

It is called a carry chain. In most libraries it is part of the full adder cell.

While in general it is good for performance to have skewed cells that provide one path through the adder that is a lot faster than the other paths, it turns out that many mappers do not handle that case well. Therefore often only symmetric cells are available. (Which are good for multipliers.)

What the other posters mentioned actually are alternative carry structures that are not chains.

Kolja Sulimma

Article: 124027
Hi,

Yes, that's true. But what I can't place very low speed signals (like reset) next to LVDS? I don't know.

But anyway I found a solution for this and I can force the fitter to place such signals next to an LVDS pair.

For such a signal the TOGGLE RATE must be defined to 0 MHz.

Thx,

Adam

> Quartus will not allow non-differential signals within I believe 4 pins of a differential signal. The solution is to move the single-ended signal away from the differential pair. The reason for this has to do with maintaining the integrity of the differential signal. A single ended signal running next to either the positive or negative end of a diff pair could induce single ended noise; and diff signals are only good at rejecting common-mode noise.
>
> Something else to beware of, depending on the device and whether the differential signals are inputs or outputs, is that the bank voltage where the differential signals reside may have to be powered by 2.5V. This may negatively impact your single ended signal causing it not to meet the VINhi levels of the device it is tied to.
>
> "Górski Adam" <gorskia@.................wp....................pl..................> wrote in message news:fc3jmt$4s8$1@atlantis.news.tpi.pl...
>> Hi,
>>
>> I'm working on a design with LVDS signals and when I'm trying to place LVDS inputs at dedicated pins I get:
>>
>> Error: Non-differential I/O pin addr[8] in pin location 86 and pad 103 too close to differential I/O pin clk_pll(n) in pin location 90 and pad 107 -- pins must be separated by a minimum of 4 pads
>>
>> And I don't know why? How can it be too close? Is there any workaround for this? (Cyclone II 2C5)
>>
>> Adam

Article: 124028
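(For reference, a signal's toggle rate can be pinned in the Quartus settings file. The line below is a sketch from memory: treat the exact assignment name and format as an assumption to verify against your Quartus version, and the pin name "reset" is just a placeholder:)

    # Hypothetical .qsf entry: declare the pin as never toggling so the
    # fitter stops treating it as a noise threat to the nearby LVDS pair.
    set_instance_assignment -name IO_MAXIMUM_TOGGLE_RATE "0 MHz" -to reset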
Górski Adam pisze:
> Hi,
>
> Yes, that's true. But what I can't place very low speed signals (like reset) next to LVDS?

But *why* can't I place very low speed signals (like reset) next to LVDS?

> I don't know.

I don't know.

> But anyway I found a solution for this and I can force the fitter to place such signals next to an LVDS pair.
>
> For such a signal the TOGGLE RATE must be defined to 0 MHz.
>
> Thx,
>
> Adam
>
>> Quartus will not allow non-differential signals within I believe 4 pins of a differential signal. The solution is to move the single-ended signal away from the differential pair. The reason for this has to do with maintaining the integrity of the differential signal. A single ended signal running next to either the positive or negative end of a diff pair could induce single ended noise; and diff signals are only good at rejecting common-mode noise.
>>
>> Something else to beware of, depending on the device and whether the differential signals are inputs or outputs, is that the bank voltage where the differential signals reside may have to be powered by 2.5V. This may negatively impact your single ended signal causing it not to meet the VINhi levels of the device it is tied to.
>>
>> "Górski Adam" <gorskia@.................wp....................pl..................> wrote in message news:fc3jmt$4s8$1@atlantis.news.tpi.pl...
>>> Hi,
>>>
>>> I'm working on a design with LVDS signals and when I'm trying to place LVDS inputs at dedicated pins I get:
>>>
>>> Error: Non-differential I/O pin addr[8] in pin location 86 and pad 103 too close to differential I/O pin clk_pll(n) in pin location 90 and pad 107 -- pins must be separated by a minimum of 4 pads
>>>
>>> And I don't know why? How can it be too close? Is there any workaround for this? (Cyclone II 2C5)
>>>
>>> Adam

Article: 124029
Kevin Neilson <kevin_neilson@removethiscomcast.net> writes:
> Symon wrote:
>> Dear All,
>> I found this online and found it useful. Maybe you will too.
>> http://incompetech.com/graphpaper/
>> Cheers, Syms.
>
> Where are the Smith charts? I worked with an RF engineer once who was still plotting Smith charts by hand and he said he had to order pads from Smith's widow, who apparently owned some patent on the pattern.

This one: http://www.pdfpad.com/smithchart/ has this statement at the bottom of the webpage:

"The Smith chart is copyrighted by Analog Instruments Company and Anita Smith, executrix of the Estate of Phillip H. Smith. Used with permission."

Cheers,
Martin

--
martin.j.thompson@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronics.html

Article: 124030
<richard.melikson@gmail.com> wrote in message news:1189490124.268974.298140@o80g2000hse.googlegroups.com...
> Hello,
>
> Most books on digital design discuss Gray codes. However, most of the focus is on generating these codes, rather than detailing their uses.
>
> I read the Wikipedia article: http://en.wikipedia.org/wiki/Gray_code, but it doesn't provide enough in-depth information on the uses of Gray code in hardware.
>
> I know Gray codes are used for:
>
> 1) Encoding state machine states. Why is it an advantage to use Gray codes here?
> 2) Async FIFO addressing between clock domains. Could anyone elaborate on this?
> 3) Error correction in digital communications. Again, I'd love to hear some more details about this.
>
> In general, what are the other uses of these codes? When was the last time you needed Gray codes in your design and for what purpose?
>
> R

We had four output pins. My software was dying somewhere around its polling loop, so at each block entry I changed one of the four bits. Guaranteed the 'scope had the right picture.

Article: 124031
On Tue, 11 Sep 2007 05:55:24 -0000, richard.melikson@gmail.com wrote:

>Hello,
>
>Most books on digital design discuss Gray codes. However, most of the focus is on generating these codes, rather than detailing their uses.
>
>I read the Wikipedia article: http://en.wikipedia.org/wiki/Gray_code, but it doesn't provide enough in-depth information on the uses of Gray code in hardware.
>
>I know Gray codes are used for:
>
>1) Encoding state machine states. Why is it an advantage to use Gray codes here?

I haven't read the wiki page, so I don't have the fuller context for the phrase you quote here. But its first patenting by Frank Gray in 1953 was for shaft encoders. Emile Baudot discovered the idea much earlier, and certainly used it in 1878 in telegraphy, but Gray's patent application (1947?) said that his patented code had "as yet no recognized name." So that was that.

In the case of shaft encoders (or actually anything trying to provide angular position information), it's a big help. Things wear and get old, bits of metal flake off or get cruddy, the contacts aren't evenly loaded, someone is turning the handle by hand, let alone any manufacturing tolerances in building the unit. By Gray coding, you can avoid the confluence of where several contact conditions change at once to represent what is really just a rather minor rotational change. The Gray code only has at most nearby contact change at any angular position. So it either changes, or it doesn't. And if it changes, it still only means a small angular difference, so if it's no longer perfect it's a smaller harm to what is going on than if this had been encoded other ways.

>2) Async FIFO addressing between clock domains. Could anyone elaborate on this?

I don't know, not having worried about this. But I assume this refers to, in saying 'async', that the arrival times of different signals aren't necessarily exactly the same, and if you have some device monitoring an encoded input and taking some action on that basis, it would be better that as each bit arrives the functional change is to a "nearby" function (more gradual-appearing in some way.) But this is just a random guess from me.

>3) Error correction in digital communications. Again, I'd love to hear some more details about this.

Now, that is actually a very interesting area. Since I have your interest.....

Let's look at the simple case of just two binary bits:

   Bin   Gray
   00    00
   01    01
   10    11
   11    10

As you know already, the Gray codes only have one bit changing between adjacent entries (or wrapping back to the top from the bottom.) However, this isn't the only two bit Gray code. There are seven others. Here's another one just as "Gray" as the one above:

   Bin   Gray
   00    11
   01    10
   10    00
   11    01

Same thing, really. I just inverted the bits. But let's imagine we want to "visualize" the way to find all possible Gray coding sequences. To do that, imagine a 2D square. Place the "00" symbol at one vertex. Place "01" and "10" at the two vertices which are directly connected to that vertex. Now place "11" at the diagonally opposite corner. Now, that places all four possibilities, but does so in a particular fashion. From this square:

   00 ----- 01
   |         |
   |         |
   |         |
   10 ----- 11

it's easy to see that to produce all eight possible Gray code sequences for two binary bits, you just select any starting vertex and then go either clockwise or counterclockwise around the square, listing the visited corners out, until you get back to the original corner.
Since you have two directions to travel in and four possible starting points, you should see that there are exactly eight possible Gray code arrangements for two bits -- each equally valid.

For three bits, it gets a little more interesting. But by extension, it's just a cube we need. Now you put "000" on one of the corners. Then place "001", "010" and "100" on the vertices which follow directly along one of the three edges leaving that corner. You can finish up the rest by keeping in mind that moving along an edge can only allow one bit to change in the vertex you then arrive at, from the one you left to get there. The cube might look like:

     100 ----- 101
     /|        /|
    / |       / |
   /  |      /  |
  000 ----- 001 |
  |   |     |   |
  |  110 ---|--111
  |  /      |  /
  | /       | /
  |/        |/
  010 ----- 011

Now if you place your mental finger at some starting vertex and then imagine trying to trace out all possible paths that will touch each other vertex exactly once and then bring you back to the original vertex, you will find that the number of possible paths is 3*2*(1*2+1*1) or 18. And that's just for one vertex selected at random. Since there are 8 of them, the total is then 8*18 or 144 possible 3-bit Gray code sequences.

All these different paths are called Hamiltonian "walks" or "cycles." In general, any particular N-bit Gray code sequence corresponds to one of the many possible Hamiltonian cycles on the corresponding N-dimensional hypercube. If you have any object with (m) vertices in N-d space, the general problem of devising a path that visits every vertex once and only once is called the Hamiltonian circuit problem (after William Rowan Hamilton, who studied the problem on the vertices of a dodecahedron.) Of course, Gray codes are concerned with a special case, that of N-dimensional hypercubes, and not things like dodecahedrons. But the idea is similar and Gray codes are Hamiltonian cycles explicitly dealing with hypercubes.

Since you asked about error correction and digital coding, this then brings me to the idea of a Hamming distance of "1." A Hamming distance of 1 would be the distance between the symbol "000" and symbol "001" or "010" or "100" on this cube. In other words, only requiring the traversal of one edge on a Gray coded n-dim hypercube. I was going to go on a bit, but I decided to see if there was a Wiki on the subject of Hamming distance and there was! In fact, it includes the darned cube I cobbled up above (kind of.) So that is even better. See:

http://en.wikipedia.org/wiki/Hamming_distance

If you imagine the Hamming distance as defined there, then for error detection one possibility (when single-bit errors are pretty likely) of what you'd like to do is to have a "space" (one of these vast N-dim hypercube things) that is larger than your need of valid symbols. For example, you might need to represent 0-999 and decide to use 16 bits for this, so your hypercube is a 16-dimensional monstrosity, but basically a "Gray coded" hypercube. What you want then is to distribute your valid symbols throughout the vertices in such a way as to maximize the Hamming distance between each of them. That way, it's less likely that a single bit error will be detected as valid. In fact, if the Hamming distance is at least 2 between any two of your valid symbols, a single bit error cannot "reach" a valid symbol so you will catch the error. There are many other ways this gets used, by the way. And it isn't all that hard to "see" this.
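(In software terms, the Hamming distance Jon is describing is just the population count of the XOR of two codewords. A minimal C sketch of that definition -- an editorial aside, not from the original post:)

    #include <stdint.h>

    /* Hamming distance: the number of bit positions in which x and y
       differ, i.e. the population count of x XOR y. */
    static unsigned hamming_distance(uint32_t x, uint32_t y)
    {
        uint32_t d = x ^ y;
        unsigned n = 0;
        while (d != 0) {
            d &= d - 1;   /* clear the lowest set bit */
            n++;
        }
        return n;
    }

(For instance, hamming_distance(0, 7) returns 3: the three-edge walk from 000 to 111 on the cube above.)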
You may have heard of Hamming error correction codes. It's the idea behind ECC memory, for example, where it not only detects, but also corrects, all single-bit errors -- and it may detect many multi-bit errors. Anyway, let's take a very simple example of how they work. The Hamming [4,7] code.

Remember the Venn diagram?

http://en.wikipedia.org/wiki/Venn_diagram

The one with three overlapping circles, which overlap enough so that there is a small region in the center which is common to all three circles? (Check out the Wiki page, there above.) But here is my dumb ASCII of it:

[ASCII art of three overlapping circles labeled 'a', 'c', and 'e', forming seven regions: region 1 lies in circle a only, region 2 in circle c only, region 3 in circle e only; region 6 is the a/c overlap, region 4 the a/e overlap, region 5 the c/e overlap; region 7 is the center, common to all three circles.]

The overlapping circles are lettered with 'a', 'c', and 'e'. In their overlapping arrangement, they create 7 regions. I've numbered those from 1 to 7, as you can see. Now, we assign one bit of a seven-bit word to each of these regions. The data bits we want to send (or receive) will be placed into the four regions numbered 4, 5, 6, and 7. Four data bits, total. The error correction bits will be assigned to regions 1, 2, and 3. Those values will be calculated from the data bits.

How to calculate them? First place your data bits into the four regions, 4, 5, 6, and 7. Then consider the sum of those data bits which are in circle 'a' (regions 4, 6, and 7) and force the bit in region 1 to be a parity bit (even or odd, your choice.) So the bit placed into region 1 is determined. Now do the same for the bits in circle 'c' (regions 5, 6, and 7) and force the bit in region 2 to be a parity bit for those, as well. Then do the same for circle 'e' (regions 4, 5, and 7) and compute that parity for region 3. All 7 bits are now determined. Keep in mind that regions 1, 2, and 3 are only one parity bit, so if you get a carry in adding up the regions just toss the carry away.

Now, you send these seven bits to a receiver somewhere. Imagine there is a 1 bit error in any of these. If the one bit error occurs in region 7 (a data bit), then all three checksums, the outer-region parity bits, will fail. You see that? All three included region 7 in calculating their parity value, so if region 7 gets hit and has an error, then all three of the parity bits will be wrong. So if the receiver sees all three parity values are wrong, it knows that the bit in region 7 was incorrectly received.

Imagine an error in the bit for region 4 happens. Now, the parity in regions 1 and 3 go wrong. So the receiver can tell that region 4 must have been received in error. Imagine that an error is in the bit for region 2. (That's a parity bit and not data, at all.) In this case, both of the other parity regions, 1 and 3, are still correct. So that means it was just a parity bit that went wrong. The table looks like:

   Bit Error Region   Parity 1   Parity 2   Parity 3
   none                 ok         ok         ok
   1                    err        ok         ok
   2                    ok         err        ok
   3                    ok         ok         err
   4                    err        ok         err
   5                    ok         err        err
   6                    err        err        ok
   7                    err        err        err

As you can see, the receiver has enough information to detect AND correct any single-bit error that may take place. This idea can easily be expanded into more Hamming codes, such as the [11,15] or [26,31], or [57,63] codes. For example, 57 data bits can be sent using 63 bits total (only 6 correction/detection bits.)
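(Here is a C sketch of that exact [4,7] layout, added for concreteness. The region numbering follows the Venn description above, even parity is assumed since the post leaves the choice open, and the function names are mine:)

    #include <stdint.h>

    /* Bit i-1 of the codeword holds Venn region i: regions 4..7 carry
       data, regions 1..3 carry even parity. Parity 1 covers regions
       4,6,7 (circle a); parity 2 covers 5,6,7 (circle c); parity 3
       covers 4,5,7 (circle e). */
    #define BIT(w, region) (((w) >> ((region) - 1)) & 1u)

    static uint8_t encode74(uint8_t data)   /* 4 data bits -> 7-bit word */
    {
        uint8_t w = (uint8_t)((data & 0x0fu) << 3);   /* regions 4..7 */
        w |= (uint8_t)((BIT(w, 4) ^ BIT(w, 6) ^ BIT(w, 7)) << 0); /* region 1 */
        w |= (uint8_t)((BIT(w, 5) ^ BIT(w, 6) ^ BIT(w, 7)) << 1); /* region 2 */
        w |= (uint8_t)((BIT(w, 4) ^ BIT(w, 5) ^ BIT(w, 7)) << 2); /* region 3 */
        return w;
    }

    /* Correct any single-bit error and return the 4 data bits. */
    static uint8_t decode74(uint8_t w)
    {
        unsigned p1 = BIT(w, 1) ^ BIT(w, 4) ^ BIT(w, 6) ^ BIT(w, 7);
        unsigned p2 = BIT(w, 2) ^ BIT(w, 5) ^ BIT(w, 6) ^ BIT(w, 7);
        unsigned p3 = BIT(w, 3) ^ BIT(w, 4) ^ BIT(w, 5) ^ BIT(w, 7);
        /* Failed-parity pattern -> region to flip (0 = no error);
           this is the table from the post rewritten as a lookup. */
        static const uint8_t region[8] = { 0, 1, 2, 6, 3, 4, 5, 7 };
        unsigned s = p1 | (p2 << 1) | (p3 << 2);
        if (region[s] != 0)
            w ^= (uint8_t)(1u << (region[s] - 1));    /* flip the bad bit */
        return (uint8_t)((w >> 3) & 0x0fu);           /* regions 4..7 */
    }

(So decode74(encode74(d) ^ (1u << k)) recovers d for any single flipped bit position k in 0..6.)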
There's a great chapter on Gray code in Martin Gardner's book, "Knotted Doughnuts" (Freeman, 1986). The chapter is called "The Binary Gray Code." On p. 13, Gardner writes:

"A Gray code for three-digit binary numbers has 2^3 = 8 numbers that can be placed on the corners of a cube. Adjacent corners have binary triplets that differ in only one place. Any continuous path that visits every corner once only generates a Gray code. For example, the path shown by the dashed line starting at 000 produces 000, 001, 011, 101, 100. [Note: his figure of the cube has these numbers on the corners.] This is a cyclic code because the path can return from 100 to 000 in one step. Such paths are called Hamiltonian paths after the Irish mathematician William Rowan Hamilton. As the reader has probably guessed, binary Gray codes correspond to Hamiltonian paths on the cubes of n dimensions."

...and he goes on to say later...

"Gray codes for other bases correspond to Hamiltonian paths on more complicated n-dimensional graphs. The number of Gray codes for any base increases explosively as the number of digits increases. The number of Gray codes, even for the binary system, is known only for four or fewer digits."

Gardner had written this in 1976 in his *Scientific American* "Mathematical Games" column. For publication in this book he added several pages of further developments. He also provides an extensive bibliography on the Gray codes and related ideas. As of 1986 the 5-d cube Hamiltonian path numbers were known, and thus the number of 5-bit Gray codes. A table for these is as follows:

   N   Vertices    Distinct paths      Total
   ----------------------------------------------------
   2   2^2 (4)     2                   8
   3   2^3 (8)     18                  144
   4   2^4 (16)    5712                91392
   5   2^5 (32)    5,859,364,320       187,499,638,240

Gardner cites a reference for this data: "At the 1980 IEEE International Conference on Circuits and Computers, at Port Chester, N.Y., a paper was presented titled 'Gray Codes: Improved Upper bounds and Statistical Estimates for n > 4 bits,' and published in 1983. The authors were Jerry Silverman, Virgil E. Vickers and John L. Sampson, electrical engineers at the Rome Air Development Center, Hanscom Air Force Base, Chicopee, Mass."

There are many other closely related studies, such as "sphere packing" in N-dimensions. You might want to look up a paper by Marcel Golay, from 1949. His coding is particularly useful for noisy environments and places a smaller set of valid Golay coded symbols in a much larger symbol space, with the Hamming distance between each of them both relatively uniform and as large as uniformly possible. I believe that Hamming and Shannon worked close by each other for some years. You might also want to read C. E. Shannon's, "A Mathematical Theory of Communication," The Bell System Technical Journal, July and October of 1948. It's really fairly easy to read and gives you a nifty segue as well into Boltzmann's discussion on the relationship of temperature and energy and the concept of entropy. (Another guy, Weaver, later pressed Shannon to reach for a wider audience and together they did a paper in 1949.)

>In general, what are the other uses of these codes? When was the last time you needed Gray codes in your design and for what purpose?

That's a big question. Start with the above and see if that helps.

Jon

Article: 124032
Hello.

I've used the Xilinx core generator MIG module to generate a DDR2 SDRAM controller. The design supports generics for the RAM timings, e.g. 15000 ps for the precharge-command delay (tRP). When I simulate the interface and measure tRP it is much longer (in my case 26350 ps). This problem also occurs with all other timings, e.g. the ras-to-cas delay (tRCD) should also be 15000 ps (the generic value) but I measured 33750 ps.

These long timing values drastically reduce the maximum throughput for short write and read bursts. Does anybody know why the interface values differ from the generic values?

thx DaMicha.

Article: 124033
On Tue, 11 Sep 2007 11:42:46 +0200, David Brown <david@westcontrol.removethisbit.com> wrote:

><snip>
>I hope you copied-and-pasted a lot of that post rather than writing it all yourself!
><snip>

Nah, I type fast. ;) The words are mine except what I quoted from Gardner. A small portion of it does come from an earlier post I'd written, though.

Jon

Article: 124034
Jonathan Kirwan wrote:

<snip huge and informative post>

I hope you copied-and-pasted a lot of that post rather than writing it all yourself!

I hadn't thought about using Gray code for communication, but it's a really smart idea. Supposing you want to send 10-bit data within a 16-bit message. All you need to do to send data "x" is to calculate:

   y = toGray16(x << 6)

At the other side, you receive "z". To get your best estimate for the original "x", you calculate:

   x' = (fromGray16(z) + 0x1f) >> 6
   xe = z - (x' << 6)

xe is the Hamming distance between the sent data (assuming the corruption is not too bad, obviously) and the received data, while x' is the nearest correct code.

mvh.,

David

Article: 124035
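(toGray16/fromGray16 are names from David's pseudocode rather than a standard library. A minimal C sketch of what they would look like, using the usual shift-XOR conversions, together with his encode and decode steps:)

    #include <stdint.h>

    /* Binary -> Gray: the classic one-liner. */
    static uint16_t toGray16(uint16_t x)
    {
        return (uint16_t)(x ^ (x >> 1));
    }

    /* Gray -> binary: fold the XOR back down across all 16 bits. */
    static uint16_t fromGray16(uint16_t g)
    {
        g ^= (uint16_t)(g >> 8);
        g ^= (uint16_t)(g >> 4);
        g ^= (uint16_t)(g >> 2);
        g ^= (uint16_t)(g >> 1);
        return g;
    }

    /* David's scheme: a 10-bit payload rides in the top bits. */
    static uint16_t send10(uint16_t x)        /* x in 0..1023 */
    {
        return toGray16((uint16_t)(x << 6));
    }

    static uint16_t best_estimate(uint16_t z) /* z as received */
    {
        return (uint16_t)((fromGray16(z) + 0x1f) >> 6);
    }

(The +0x1f before the shift rounds the decoded value to the nearest 10-bit payload rather than truncating it.)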
On Tue, 11 Sep 2007 01:52:08 -0700, Jonathan Kirwan <jkirwan@easystreet.com> wrote:

>The Gray code only has at most nearby contact change at any
>angular position.

I meant, "The Gray code only has at most _one_ nearby contact change at any angular position."

Jon

Article: 124036
On Sep 11, 4:52 pm, Jonathan Kirwan <jkir...@easystreet.com> wrote:
> On Tue, 11 Sep 2007 05:55:24 -0000, richard.melik...@gmail.com wrote:
> >Hello,
>
> >Most books on digital design discuss Gray codes. However, most of the focus is on generating these codes, rather than detailing their uses.
> <snip>
> >2) Async FIFO addressing between clock domains. Could anyone elaborate on this?
>
> I don't know, not having worried about this. But I assume this refers to, in saying 'async', that the arrival times of different signals aren't necessarily exactly the same and if you have some device monitoring an encoded input and taking some action on that basis, it would be better that as each bit arrives that the functional change is to a "nearby" function (more gradual-appearing in some way.) But this is just a random guess from me.

This technique is useful if you access RAM/ROM linearly as a FIFO -- say, for example, a video frame buffer. Using Gray code instead of straight low-to-high addressing allows you to stream the data into or out of memory without using a latch. This saves one clock cycle per unit of data; considering that a read/write with a latch takes two clock cycles (1. change address, 2. latch data), it achieves a 100% improvement in throughput.

Article: 124037
Hi Peter, thanks for replying. My comments below...

> Binary-coded counter sequences often change multiple bits on one count transition. That can (will) lead to decoding glitches, especially when counter values are compared for identity.

What do you mean by "compared for identity" - do you mean equality of two multi-bit registers? Also, what kind of glitches are you referring to here? Logic hazards?

> Gray-coded count sequences always change one, and only one, bit on each transition. Comparing two such counters for identity will thus never generate a decoding glitch.
> That is the reason for Gray code counters in FIFO addressing, where FULL and EMPTY is established by comparing two asynchronously clocked counters for identity. Gray code addressing is a must for asynchronous FIFOs.

Why can't the values just be synchronized (double FF) between the domains with a handshake and solve the problem of glitches in the first place?

> The "one bit change per transition" advantage occurs only in counters, or under other very restrictive circumstances. I do not see an advantage in general state machines, where sequences are not as predictable.

So why do all synthesis tools propose "gray code" as one of the encodings of state machines?

R

Article: 124038
> It depends on the state machine. Ones with complex state-to-state jumps will find it hard to follow a Gray pattern, but if the state engine is Spin-Phase-Sync in nature (ie simple circular states, with 'waits' and decodes), then Gray can work well.

What do you mean "can work well"? Won't normal encoding work just as well? I can see an advantage in one-hot encoding, but not Gray.

> <snip>
>
> > In general, what are the other uses of these codes? When was the last time you needed Gray codes in your design and for what purpose?
>
> Gray codes are also common in Absolute Rotary encoders. You could also argue that quadrature encoders, and Stepper Motor drive, are special 2 bit cases of Gray counters.

Could you elaborate on this?

Thanks for the information, btw

R

Article: 124039
richard.melikson@gmail.com wrote:
>> It depends on the state machine. Ones with complex state-to-state jumps will find it hard to follow a Gray pattern, but if the state engine is Spin-Phase-Sync in nature (ie simple circular states, with 'waits' and decodes), then Gray can work well.
>
> What do you mean "can work well"? Won't normal encoding work just as well? I can see an advantage in one-hot encoding, but not Gray.

Gray is useful if you need to decode the states, and also if the state-next terms are async, as you have just ONE dependent FF active, so you cannot get a false state due to aperture effects. With one-hot, you must ensure trigger signals are fully sync.

>> <snip>
>>
>>> In general, what are the other uses of these codes? When was the last time you needed Gray codes in your design and for what purpose?
>>
>> Gray codes are also common in Absolute Rotary encoders. You could also argue that quadrature encoders, and Stepper Motor drive, are special 2 bit cases of Gray counters.
>
> Could you elaborate on this?

On what, the Absolute encoder use, or the Quadrature == 2 bit Gray? Just draw a 2 bit Gray counter, and then make it UpDown, and you will see the result.

-jg

Article: 124040
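(To see the quadrature connection Jim points at: the 2-bit Gray cycle 00 -> 01 -> 11 -> 10 -> 00 is exactly the A/B waveform of a quadrature encoder, so a decoder only ever sees one channel change per step. A C sketch of a table-driven decoder built on that property -- my illustration, not code from the thread:)

    #include <stdint.h>

    /* Table-driven quadrature decoder. The index combines the previous
       and current AB pairs; legal steps follow the 2-bit Gray cycle
       00 -> 01 -> 11 -> 10 -> 00 (one channel changes at a time).
       Entries are +1 for a forward step, -1 for reverse, and 0 for
       no movement or an illegal two-bit jump. */
    static int quad_step(uint8_t prev_ab, uint8_t curr_ab)
    {
        static const int8_t table[16] = {
        /*  curr: 00  01  10  11        prev: */
                   0, +1, -1,  0,    /* 00 */
                  -1,  0,  0, +1,    /* 01 */
                  +1,  0,  0, -1,    /* 10 */
                   0, -1, +1,  0     /* 11 */
        };
        return table[((prev_ab & 3u) << 2) | (curr_ab & 3u)];
    }

(Accumulating quad_step(prev, curr) on every sample gives a position count; because only one bit changes per legal step, a sampling race can at worst miss a step, never produce a wrong-direction jump.)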
Jonathan,

Wow - an amazing post, very informative. Personally I'm familiar with most of this material, but it will definitely serve as a great placeholder for searches on the subject. Google is great in indexing such information reservoirs.

Anyway, although the topic of error correction codes using Hamming distances is enlightening, I don't see how Gray codes help. Maybe it's just morning dense-ness, but it appears that these topics are tangential.

> >In general, what are the other uses of these codes? When was the last time you needed Gray codes in your design and for what purpose?
>
> That's a big question. Start with the above and see if that helps.

I didn't mean it as a big question. It's quite simple, really - when was the last time *you* Jonathan (and other readers interested in sharing) used Gray codes in digital design, either in coding logic or software?

TIA

R

Article: 124041
richard.melikson@gmail.com wrote:
> Hi Peter, thanks for replying. My comments below...
>
>> Binary-coded counter sequences often change multiple bits on one count transition. That can (will) lead to decoding glitches, especially when counter values are compared for identity.
>
> What do you mean by "compared for identity" - do you mean equality of two multi-bit registers?

Yes.

> Also, what kind of glitches are you referring to here? Logic hazards?
>
>> Gray-coded count sequences always change one, and only one, bit on each transition. Comparing two such counters for identity will thus never generate a decoding glitch.
>> That is the reason for Gray code counters in FIFO addressing, where FULL and EMPTY is established by comparing two asynchronously clocked counters for identity. Gray code addressing is a must for asynchronous FIFOs.
>
> Why can't the values just be synchronized (double FF) between the domains with a handshake and solve the problem of glitches in the first place?

They could be, but that would add latencies to the FIFO that are best avoided. ie why compromise the operation, when a Gray Code design will work better?

-jg

Article: 124042
> This technique is useful if you access RAM/ROM linearly as a FIFO. Say for example a video frame buffer. Using gray code instead of straight low to high addressing allows you to stream the data into or out of memory without using a latch.

Could you elaborate on this, please - how does it avoid a latch, and where does the latch come from anyway in straight addressing?

R

Article: 124043
On Tue, 11 Sep 2007 10:37:30 -0000, richard.melikson@gmail.com wrote:

><snip>
>> That's a big question. Start with the above and see if that helps.
>
>I didn't mean it as a big question. It's quite simple, really - when was the last time *you* Jonathan (and other readers interested in sharing) used Gray codes in digital design, either in coding logic or software?

2005, for an EEPROM counter.

Jon

Article: 124044
>> Why can't the values just be synchronized (double FF) between the
>> domains with a handshake and solve the problem of glitches in the
>> first place?
>
>They could be, but that would add latencies to the FIFO that
>are best avoided. ie why compromise the operation, when a Gray
>Code design will work better?

It's more complicated than that. Running a multi-bit value through a synchronizer doesn't work if more than one bit changes. Some of the bits might get there in time when others don't.

If the data changes from 000 to 111, you might see 001, 010, 100, 011, 101, or 110 for a cycle before it settles. (That's assuming it was stable at 000 for several cycles and then stays at 111 for several cycles.)

--
These are my opinions, not necessarily my employer's.  I hate spam.

Article: 124045
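(Hal's point is easy to brute-force in software. The C sketch below is an editorial illustration, assuming each bit of the synchronizer independently resolves to either the old or the new value for one cycle; it prints every code the receiving domain might observe. A binary 000 -> 111 step can show up as any of the eight values, while a Gray step like 010 -> 110 can only show up as the old or the new code:)

    #include <stdint.h>
    #include <stdio.h>

    /* For every subset of bits that "made it" before the sample point,
       print the 3-bit value the other clock domain could latch. */
    static void show_observable(uint8_t oldv, uint8_t newv)
    {
        printf("%u -> %u:", oldv, newv);
        for (uint8_t took_new = 0; took_new < 8; took_new++) {
            /* Bits set in took_new take the new value, the rest old. */
            uint8_t seen =
                (uint8_t)(((newv & took_new) | (oldv & ~took_new)) & 7u);
            printf(" %u", seen);
        }
        printf("\n");
    }

    int main(void)
    {
        show_observable(0, 7);  /* binary step 000 -> 111: prints 0..7  */
        show_observable(2, 6);  /* Gray step 010 -> 110: only 2s and 6s */
        return 0;
    }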
Hi,

Do you have the same clock period in the simulation as you have specified for the MIG core?

Göran

<damicha@gmx.de> wrote in message news:1189502099.236300.141070@g4g2000hsf.googlegroups.com...
> Hello.
>
> I've used the Xilinx core generator MIG module to generate a DDR2 SDRAM controller. The design supports generics for the RAM timings, e.g. 15000 ps for the precharge-command delay (tRP). When I simulate the interface and measure tRP it is much longer (in my case 26350 ps). This problem also occurs with all other timings, e.g. the ras-to-cas delay (tRCD) should also be 15000 ps (the generic value) but I measured 33750 ps.
>
> These long timing values drastically reduce the maximum throughput for short write and read bursts. Does anybody know why the interface values differ from the generic values?
>
> thx DaMicha.

Article: 124046
Hi, I have a question: is there an xbd file (for EDK) for the HydraXC available? And where can I find some documentation besides the HydraXC web page (because there is not much on there)? Perhaps some docs or sample code?

Thanks,
urban

Article: 124047
> >They could be, but that would add latencies to the FIFO that > >are best avoided. ie why compromise the operation, when a Gray > >Code design will work better ? > > It's more complicated than that. Running a multi-bit value > through a synchronizer doesn't work if more than one bit changes. > Some of the bits might get there in time when others don't. > > If the data changes from 000 to 111, you might see > 001, 010, 100, 011, 101, or 110 for a cycle before > it settles. (That's assuming it was stable at 000 > for several cycles and then stays at 111 for several > cycles.) Ah - this is a most important point, I think. So, you imply that if I want to share a counter between two clock domains, I can so a simple double FF synchronization on it if it's encoded in Gray, whilst for a normal counter I need a handshake protocol with a synchronized control signal ? I think I understand how this can make things faster. RArticle: 124048
> > What do you mean "can work well"? Won't normal encoding work just as
> > well? I can see an advantage in one-hot encoding, but not Gray.
>
> Gray is useful if you need to decode the states,

What does this mean? How is Gray better than regular binary when you need to decode the states?

R

Article: 124049
> >I didn't mean it as a big question. It's quite simple, really - when
> >was the last time *you* Jonathan (and other readers interested in
> >sharing) used Gray codes in digital design, either in coding logic or
> >software?
>
> 2005, for an EEPROM counter.
>

An internal address counter for access to the EEPROM? Why did you choose Gray encoding in this case?

R