Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Brian Philofsky <brian.philofsky@no_xilinx_spam.com> writes:
> [...]
> If you did not use any translate_off / ons in the code, you should
> have seen errors during either XST or NGDBUILD as it would have
> identified the incorrect parameter and properly warned but if those
> are in the code, the tools ignore them as you are instructing with
> those commands.

This is not my experience. In the code below, the generic associations
marked with XXX are clearly wrong but go unnoticed. Also, I was
unsuccessful in setting the "PERIOD" attribute from VHDL (also marked
with XXX in the code). Xst says:

  Timing constraint: NET clkgen_clk_100_buf PERIOD = 10 nS HIGH 5 nS

But par does not see this constraint, I think. If I put the constraint
in a .ucf file, then par does see it. Hmmm.

Thanks a lot, Brian! Here is the code fragment:

-- clock_generator
--
-- generates deskewed system clocks and handles reset

library ieee;
use ieee.std_logic_1164.all;

library unisim;
use unisim.vcomponents.all;

use work.common.all;

entity clock_generator is
  generic (
    clk_var_mult : positive;   -- Frequency multiplier for variable clock
    clk_var_div  : positive);  -- Frequency divider
  port (
    -- external connections
    clk_100_in : in  u1;   -- 100MHz Clock from external oscillator
    clk_sd_out : out u1;   -- 100MHz to SDRAM
    clk_sd_in  : in  u1;   -- 100MHz feedback from SDRAM
    reset_in   : in  u1;   -- Reset input from switch
    -- global clock and reset
    clk_100 : out u1;   -- Deskewed 100MHz clock, suitable for the
                        -- SDRAM controller, for example.
    clk_var : out u1;   -- Variable frequency clock, see generics.
    reset   : out u1);  -- Reset
end entity;

architecture syn of clock_generator is
  signal clk_sd_in_buf, clk_100_deskewed, clk_100_buf : u1;
  signal clk_var_unbuf, clk_var_buf : u1;
  signal locked : u1;

  attribute period : string;
  attribute period of clk_100 : signal is "10 ns";  -- XXX - no effect
  attribute period of clk_var : signal is "40 ns";  -- XXX - no effect
begin
  -- The 100MHz clock is directly routed to the SDRAM.  The clock for
  -- internal logic is deskewed with respect to the clock coming back
  -- from the SDRAM.
  clk_sd_out <= clk_100_in;

  inbuf: IBUFG port map (
    i => clk_sd_in,
    o => clk_sd_in_buf);

  deskew: DCM
    generic map (
      clkfx_mult => clk_var_mult,    -- XXX - no error for this
      clkfx_div  => clk_var_div,     -- XXX - no error for this
      hi_xst_how_is_your_day => 0,   -- XXX - no error for this
      clkin_period => 10.0)
    port map (
      rst    => reset_in,
      clkin  => clk_sd_in_buf,
      clkfb  => clk_100_buf,
      clk0   => clk_100_deskewed,
      clkfx  => clk_var_unbuf,
      locked => locked);

  buf_clk_100: BUFG port map (
    i => clk_100_deskewed,
    o => clk_100_buf);

  buf_clk_var: BUFG port map (
    i => clk_var_unbuf,
    o => clk_var_buf);

  clk_100 <= clk_100_buf;
  clk_var <= clk_var_buf;

  reset <= not locked;
end syn;

-- GPG: D5D4E405 - 2F9B BCCC 8527 692A 04E3 331E FAF8 226A D5D4 E405

Article: 80851
Jedi wrote:
> Rudolf Usselmann wrote:
>> ezpcb.com wrote:
>>
>> A few weeks ago we had pcbexpress make 25 proto boards for us.
>> 4 layers, 8.5 cm x 13 cm, 7 mil t/s, silk screen on both sides,
>> fr4 ... Total cost was $650, including international UPS shipping.
>
> 25 pieces of 4-layer boards this size incl. 2nd silkscreen
> costs around 600 US$ at eurocircuits.
>
> And that is including VAT and shipping costs...
>
> rick

Problem is, the eurocircuits web site is useless, as it only works with
Internet Explorer 5.5 or higher (try to get a quote with Firefox). Any
provider that forces me to use a certain web browser (IE) or operating
system (Windows) is out of the question.

Regards,
rudi

Article: 80852
valentin tihomirov wrote:
> I think the OP asked for a proper design demo on the single signal
> path example.

The OP asked:
> My question is if I use your advice, I would be registering 'OUT' at
> each level, is that what I should do?

The consensus of the replies was: yes, if all else is equal,
registering the outputs of all design entities (modules) is a good
thing.

Yes, I passed on a 5-level hierarchy example. I prefer a two-level
hierarchy where the top level contains only instances and wires. This
eliminates the concern about how a level-5 output escapes to the top.

-- Mike Treseler

Article: 80853
On Sat, 12 Mar 2005 16:36:31 +0300, "Maxim S. Shatskih"
<maxim@storagecraft.com> wrote:

>> The start bits help to resynchronize. If the data flow isn't continuous but
>
> Exactly. Also note that, with serial line (unlike Ethernet, USB etc), the
> pauses between bytes can be of arbitrary time. Only the timings between
> bits in a byte are established.
>
> Serial line has no notion of the "packet".

Actually, with 10 Mb/s Ethernet and USB (all speeds) there is no
carrier on the wire when actual data is not being transmitted. Clock
recovery starts at an arbitrary point with every packet.

Article: 80854
"Clay S. Turner" <Physics@Bellsouth.net> wrote in
news:m6sVd.42269$Rl5.37353@bignews4.bellsouth.net:
>
> Way Cool Al!
>
> "Al Clark" <dsp@danvillesignal.com> wrote in message
> news:Xns960D8FBD5DE97aclarkdanvillesignal@66.133.129.71...
>> Analog Devices, Altera and Danville Signal joined forces to create
>> the ADDS-21261/Cyclone DSP & FPGA Evaluation Package featuring
>> Danville's new dspstak 21261zx DSP Engine and dspstak c96k46 I/O
>> Module.

Thanks, Clay. We showed it earlier this week at ESC.

--
Al Clark
Danville Signal Processing, Inc.
--------------------------------------------------------------------
Purveyors of Fine DSP Hardware and other Cool Stuff
Available at http://www.danvillesignal.com

Article: 80855
>> Serial line has no notion of the "packet".
>
> Packetization is an artificial thing. Normally, the physical data
> channels are continuous and bear no idea of any data packetization.

Oh no. The Ethernet packet cannot have pauses between bytes; it is
transmitted as a single entity time-wise. Same with USB packets, though
they are smaller. Serial UART has no notion of such a mode at all. This
is why it is called "async" in Cisco IOS.

There are also some serial attachment cables which have the notion of
the packet (transmitted as a single entity time-wise; I also expect
this interface to have packet headers). They were used in X.25
equipment, and are incompatible with UART cabling. Cisco IOS called
this interface "serial".

The PPP protocol requires a packet-based underlying physical medium.
So, to run PPP over the UART line (which means - the usual modem) -
escape bytes are used to denote the packet boundaries. This is
described in the PPP RFC as an appendix. In Windows, this logic is in
AsyncMac.sys, which "packetizes" the serial UART line.

> Packets are just
> a way to transfer data in portions, which is most useful in the
> channels, where the data can be lost or corrupted.

Another very, very major purpose of packetization is time-sharing a
single physical media line among several traffic flows.

> requested for retransmission. Another way to combat data corruption
> is to add the redundancy into the data itself, by some forward error
> correction (FEC) mechanism.

Most network media do not use ECC; they use the stupid checksums. The
recovery is done via retransmission. ECC is used in storage, not
networking.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
maxim@storagecraft.com
http://www.storagecraft.com

Article: 80856
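The escape-byte scheme Maxim mentions for running PPP over an async
UART line can be sketched in a few lines. This is an illustration only:
the flag (0x7E) and escape (0x7D) values are the ones defined for
HDLC-like framing in the PPP RFCs, but the control-character mapping
and FCS that a real implementation also needs are omitted here.

```python
FLAG = 0x7E  # marks the start and end of a frame on the wire
ESC = 0x7D   # escape byte; the byte that follows is XORed with 0x20

def frame(payload: bytes) -> bytes:
    """Wrap one packet for transmission over a byte-oriented async line.

    Any payload byte that collides with FLAG or ESC is escaped, so the
    receiver can always find packet boundaries by scanning for FLAG.
    """
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

print(frame(b"\x01\x7e\x02").hex())  # 7e017d5e027e
```

Note how the 0x7E inside the payload becomes the pair 0x7D 0x5E, so the
only bare 0x7E bytes on the line are the frame boundaries themselves.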
> Actually, with 10 Mb/s Ethernet and USB (all speeds) there is no
> carrier on the wire when actual data is not being transmitted.

Same as with serial UART.

> Clock recovery starts at an arbitrary point with every packet.

USB packets have a preamble to synchronize the PLLs. IIRC Ethernet too.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
maxim@storagecraft.com
http://www.storagecraft.com

Article: 80857
In ISE 6.2i, I compile a design off a mapped drive at a foreign site
(i.e. I did not install the software and do not have administrative
privileges). For example, when I run Map, sometimes it completes with a
green check mark, and sometimes it completes with no green check mark.

Sometimes when it compiles with a green check mark and I hit the Map
Report, it reruns all the tools again (like xst, ngcbuild, map) as
though a source HDL file had been changed, but no source files have
been changed. It's maybe like the filesystems are running off two
different timestamp clocks?

Any ideas?

-Newman

Article: 80858
On Sat, 12 Mar 2005 18:52:24 +0000, newman wrote:

> In ISE 6.2i, I compile a design off a mapped drive at a foreign site
> (ie I did not install the software and do not have administrative
> privileges). For example, I try to run Map, sometimes it completes
> with a green check mark, and sometimes it completes with no green
> check mark. Sometimes when it compiles with a green check mark, and
> I hit the Map Report, it reruns all the tools again (like xst,
> ngcbuild, map) as though a source hdl file had been changed. but no
> source files have been changed. Its maybe like the filesystems are
> running off two different timestamp clocks?
>
> Any ideas?
>
> -Newman

The timestamp thing can be a problem for make-based dependency checking
(which ISE uses).

But if all the design files are stored on the same computer, I would
think that the timestamp would be consistent across all of them.

As a practical matter, would it be possible for you to simply check the
time on all relevant computers? The first step to corrective action is
to determine what the problem is. Inconsistent time is one hypothesis,
now you should test it. If the time is consistent, then you will have
to come up with a new hypothesis.

--Mac

Article: 80859
"Mac" <foo@bar.net> wrote in message
news:pan.2005.03.12.19.14.14.656152@bar.net...
> On Sat, 12 Mar 2005 18:52:24 +0000, newman wrote:
>
>> In ISE 6.2i, I compile a design off a mapped drive at a foreign site
>> (ie I did not install the software and do not have administrative
>> privileges). For example, I try to run Map, sometimes it completes
>> with a green check mark, and sometimes it completes with no green
>> check mark. Sometimes when it compiles with a green check mark, and
>> I hit the Map Report, it reruns all the tools again (like xst,
>> ngcbuild, map) as though a source hdl file had been changed. but no
>> source files have been changed. Its maybe like the filesystems are
>> running off two different timestamp clocks?
>>
>> Any ideas?
>>
>> -Newman
>
> The timestamp thing can be a problem for make-based dependency
> checking (which ISE uses).
>
> But if all the design files are stored on the same computer, I would
> think that the timestamp would be consistent across all of them.
>
> As a practical matter, would it be possible for you to simply check
> the time on all relevant computers? The first step to corrective
> action is to determine what the problem is. Inconsistent time is one
> hypothesis, now you should test it. If the time is consistent, then
> you will have to come up with a new hypothesis.
>
> --Mac

Well, the files are stored on a remote computer. Looking at the file
timestamps in an explorer window, I did notice that when I refreshed
the window, some of the working files were later than the time
displayed on the local computer (where the compilation takes place). I
did bump up the local time by a couple of minutes in order to guess an
approximate equal time for both computers, but I still experienced the
same problem.

-Newman

Article: 80860
imanpreet@gmail.com wrote: > Hello, > > I am wondering if someone could clear this doubt for me, in case of > UART, the clock speed is 1.8432 MHZ, however it is able to transmit > maximum of 115,200 bps, however even if we are able to transmit at 1 > bit per cycle we should be able to transmit at 1,843,200 bps. What is > the rationale for making something go slowly, when it can go much > faster. You are correct, that it is technically possible to SEND at 1.842MHz, and some modern UARTS have this option ( see Oxford semi ). The problems are on RECEIVE, where historically a 16x sample rate was chosen for the RX Clock. The 16x is not mandatory, but a trade off between speed and clock drift/edge jitter tolerance. Some devices have variable sample rates : /8 is often seen, and even /5, and some uarts can output/input a clock, but by that stage they have morphed away from the ASYNC approach. Because the issue is one of phase of sample, not frequency per-se, with devices like FPGAs that have Digital Delay lines, you could design a UART with a RX clock == BAUD rate, and choose the delay line tap, on each start bit. -jgArticle: 80861
"Maxim S. Shatskih" <maxim@storagecraft.com> wrote in message
news:d0vc6j$2a19$1@gavrilo.mtu.ru...
>> requested for retransmission. Another way to combat data corruption
>> is to add the redundancy into the data itself, by some forward error
>> correction (FEC) mechanism.
>
> Most network media do not use ECC, they use the stupid checksums. The
> recovery is done via retransmission. ECC is used in storage, not
> networking.

If the retransmissions are very costly (e.g. half-duplex physical
channels) but the BER (Bit Error Rate) is good enough, error correction
is quite efficient.

Alex

Article: 80862
Alexei A. Frounze wrote:
> "Hal Murray" <hmurray@suespammers.org> wrote in message
> news:tdednf4NTuXAL6_fRVn-vg@megapath.net...
>>> I am wondering if someone could clear this doubt for me, in case of
>>> UART, the clock speed is 1.8432 MHZ, however it is able to transmit
>>> maximum of 115,200 bps, however even if we are able to transmit at
>>> 1 bit per cycle we should be able to transmit at 1,843,200 bps.
>>> What is the rationale for making something go slowly, when it can
>>> go much faster.
>>
>> The traditional implementation uses a 16x clock on the receiver.
>>
>> It looks for the transition on the leading edge of the start bit,
>> then starts counting. After 1.5 baud times, it samples the first bit
>> in the byte. That's the middle of the bit cell.
>
> The oversampling not only makes it possible to determine the position
> of the start bit, but also it combats the noise and errors. The 16
> consecutive sampled values (each being 0 or 1) are used to decide on
> the actual bit value. If most of these 16 samples are 1, it's 1.
> Otherwise, it's 0. A better (in terms of error immunity) UART is one
> that works with a current loop instead of the regular one that works
> with voltage.

Pardon me for being a novice, but are you saying that 16 samples of a
single bit will actually be transmitted? And that the reason we have
the maximum transmission rate is because 16 copies of a bit will be
transmitted?

--
Imanpreet Singh Arora

Article: 80863
> Pardon me for being a novice, but are you saying that 16 samples of a
> single bit will actually be transmitted? And that the reason we have
> the maximum transmission rate is because 16 copies of a bit will be
> transmitted?
>
> --
> Imanpreet Singh Arora

No. The 16 samples-per-bit is done on the receiving side. The
receiver's sample clock is not locked to the transmitter's. So, because
of this, it needs to sample many parts of what it thinks may be one
bit. To determine if that bit was truly a one or a zero, many samples
are taken. If, for example, 10 of the 16 samples were the same
polarity, then it's safe to assume that the bit was that particular
polarity.

If the transmitter sent a clock along with the data, then the receiver
could use that clock to sample the data only once (the same as just
latching the data with its clock).

If the transmitter didn't send a clock, but rather sent all data bits
in a very periodic manner, then the receiver could use the data edges
to infer (recover) where the center of each bit is. This method
requires a fairly constant flow of data edges. Because of this, the
data is usually scrambled to ensure that this edge-density requirement
is met.

These two methods of data transmission are referred to as "synchronous"
transmission.

On the other hand, a UART (what you're talking about) is used to send
and receive data "asynchronously". Therefore the receiver doesn't
really know where each data bit is supposed to be (in time). So,
sampling and (some sort of) majority decision making is required.
Hence, the lower data rate.

Bob

Article: 80864
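The majority-decision idea in this post can be sketched in a few lines.
This is an illustration of the voting step only; as the follow-up posts
point out, many real UARTs locate the middle of the start bit and then
sample each bit once (or three times) rather than voting over all 16
sub-bit samples.

```python
def majority_bit(samples):
    """Decide a received bit from oversampled line values (0s and 1s).

    Returns 1 if more than half of the samples are high, else 0.
    """
    ones = sum(samples)
    return 1 if ones * 2 > len(samples) else 0

# 16 samples of one bit cell: a '1' with two samples corrupted by noise
# still decodes as 1.
noisy_one = [1] * 14 + [0, 0]
print(majority_bit(noisy_one))  # 1
```

The voting makes a single noise glitch inside a bit cell harmless,
which is the error-immunity benefit mentioned above.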
Rudolf Usselmann wrote:
> Jedi wrote:
>
> Problem is, the eurocircuits web site is useless, as it only works
> with Internet Explorer 5.5 or higher (try to get a quote with
> Firefox). Any provider that forces me to use a certain web browser
> (IE) or operating system (Windows) is out of the question.

Agree. Whenever a website asks me to upgrade my Mozilla or Konqueror to
IE, I skip it. Two exceptions are ameritrade.com and usps.com. With
ameritrade.com, I just click on 'stop' before it redirects itself to
the 'error' page. With usps.com, there is a pop-up but the website
itself performs normally.

vax, 9000

> Regards,
> rudi

Article: 80865
Thanks everyone.

--
Imanpreet Singh Arora

Article: 80866
Hey all:

I'm working with Altera's Stratix 1S40 dev kit, and am debugging a CMOS
sensor interface where the FSM that controls the FIFO appears to be in
no state.

The purpose of this CMOS sensor interface is to simply take the 16-bit
data from the sensor on every rising edge of the pixel clock (27 MHz)
and make it available to the Nios II processor (via the Avalon bus;
~50 MHz). To this end, I've used a dual-clock FIFO, with the write port
connected to the sensor, and the read port to the Avalon interface
(+ glue logic, of course). I also have a 3-state FSM (to control the
FIFO) that (state 1) simply waits for the frame from the sensor to
become valid, then (state 2) waits for the horizontal line to be valid,
and finally (state 3) writes to the FIFO until it is full.

Now the problem appears when I use Nios to transfer the contents of the
FIFO directly to the UART (and to a computer for some post-processing):
after a few 1000 pixels (around 15% of the total), the stream of data
to the computer stops. After some debugging with Nios' GDB/Eclipse
setup and SignalTap, it appears that the constructed FSM is in no state
(since the SignalTap signals <state_signal>.<state_name> for each of
the 3 states were low). During the compilation process, I received no
warnings/errors regarding timing constraints etc., or anything else for
that matter.

I am at a complete loss as to how to even begin fixing this. Thoughts,
anyone?

Thanks in advance,
James

Article: 80867
I have a problem when simulating with the Samsung DDR behavioral model.
I have tested my design with the Micron behavioral model, but Micron is
not accurate regarding the tDQSS time constraint, so I used the Samsung
model. But I don't know why the DQS bus goes to the X state in write
transactions. It seems that the behavioral model tries to drive the DQS
while the controller also tries to drive the DQS, so the result goes to
the X state. I am not sure.

My design is in VHDL and the Samsung model is in Verilog. I use
ModelSim 5.5e PLUS to do the simulation.

I don't know where the problem is. Can anyone help? Or guide me to
where I can find a reliable model other than Denali's?

Article: 80868
Bob wrote:
>> Pardon me for being a novice. But, are you saying that in actual 16
>> samples will be transmitted of a single bit. And that the reason, we
>> have the maximum transmission rate is because 16 copies of a bit
>> will be transmitted.
>>
>> --
>> Imanpreet Singh Arora
>
> No. The 16 samples-per-bit is done on the receiving side. The
> receiver's sample clock is not locked to the transmitter's. So,
> because of this, it needs to sample many parts of what it thinks may
> be one bit. To determine if that bit was truly a one or a zero, many
> samples are taken. If, for example, 10 of the 16 samples were the
> same polarity, then it's safe-to-assume that the bit was that
> particular polarity.

No, I don't think most UARTs work this way. The reason for oversampling
is to get a good determination of the sample point. The UART first
tries to determine the middle of the start bit, oversampling by 16;
then it samples only once per bit.

If you are sampling at the BAUD rate, then the error in the sample
point can be almost half a bit in either direction. If you are sampling
at BAUD*16, then the error is 1/16, or 6.25%. On top of that error you
have a deviation between the sender's and receiver's frequencies vs the
nominal BAUD rate; they can deviate in differing directions. After
receiving 10 bits, your max deviation should be less than half a bit
time. So the equation is:

  0.0625 + (10 x 2 x err) <= 0.5

Solving the equation gives err <= 2.19%.

If you sample once per bit, then your initial error is 0.5:

  0.5 + (10 x 2 x err) <= 0.5

This will work only if there is NO deviation between the clock
frequencies, i.e. you need to have a common clock.

If you sample two times per bit, your initial error is 0.25:

  0.25 + (10 x 2 x err) <= 0.5
  err <= 0.0125

So your receiver and transmitter must be much closer to each other.

You can apply tricks which resynchronize the receiver when an edge is
detected, but when you transmit 0x00 you will have a low start bit
followed by 8 low bits, so there are no edges:

  ****************________________****************
  start bit is used to determine the sample point

> If the transmitter sent a clock along with the data, then the
> receiver could use that clock to sample the data only once (the same
> as just latching the data with its clock).
>
> If the transmitter didn't send a clock, but rather sent all data bits
> in a very periodic manner, then the receiver could use the data edges
> to infer (recover) where the center of each bit is. This method
> requires a fairly constant flow of data edges. Because of this, the
> data is usually scrambled to insure that this edge density
> requirement is met.
>
> These two methods of data transmission are referred to as
> "synchronous" transmission.
>
> On the other hand, a UART (what you're talking about) is used to send
> and receive data "asynchronously". Therefore the receiver doesn't
> really know where each data bit is supposed to be (in time). So,
> sampling and (some sort of) majority decision making is required.
> Hence, the lower data rate.
>
> Bob

--
Best Regards,
Ulf Samuelsson
ulf@a-t-m-e-l.com
This message is intended to be my own personal view and it may or may
not be shared by my employer Atmel Nordic AB

Article: 80869
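Ulf's error budget generalizes to one small formula, which can be
checked directly. A sketch only; the 10-bit frame length and the factor
of 2 for clocks drifting in opposite directions are taken from the post
above:

```python
def max_clock_error(initial_offset: float, bits_per_frame: int = 10) -> float:
    """Largest tolerable per-clock frequency error (as a fraction) such
    that the last sample point stays within half a bit of the bit center.

    Solves: initial_offset + bits_per_frame * 2 * err <= 0.5
    where everything is measured in bit times.
    """
    return (0.5 - initial_offset) / (2 * bits_per_frame)

print(round(max_clock_error(1 / 16) * 100, 2))  # 2.19 - 16x oversampling
print(max_clock_error(0.5))   # 0.0    - 1x sampling needs a common clock
print(max_clock_error(0.25))  # 0.0125 - 2x sampling
```

This reproduces the three cases above: 2.19% tolerance with 16x
oversampling, zero tolerance at 1x, and 1.25% at 2x.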
Hej, can anyone easily tell me which PROM I have to add on this board:
http://www.coreworks.pt/basicboard.htm

I would like the PROM to be big enough to hold as much data as I can
use with all the "space" in the FPGA. The Spartan2 300k I used just to
upload, but I have to use it for a school project and I want to order
the PROMs in good time before I have to use them.

Thanks

--
Mvh
Kasper

Article: 80870
> On the other hand, a UART (what you're talking about) is used to send
> and receive data "asynchronously". Therefore the receiver doesn't
> really know where each data bit is supposed to be (in time). So,
> sampling and (some sort

Yes, that's why it uses start and stop bits.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
maxim@storagecraft.com
http://www.storagecraft.com

Article: 80871
Hi James,

If I understand you correctly, you are using a ONE-HOT encoding style
(3 flops - one for each state). Try converting the FSM encoding style
to GRAY and check it out. Maybe your FSM suffers from some
asynchronous-related problems; if the GRAY encoding reduces or
eliminates the occurrence rate of this problem (the no-state problem),
then you know that you have to check your design for async hazards.

I hope that this is helpful,
Moti.

Article: 80872
Hi Kubik,

Xilinx has a command-line application called NETGEN; I hope that it
will do the trick for you.

Regards,
Moti.

Article: 80873
Hello,

I'm getting a bachelor's in Computer Engineering, and as a side project
I would like to build a Spartan3 parallel ATA-3 (IDE) interface. I've
downloaded a copy of the ATA-3 specs (since this is free (and
deprecated!)) and have an old HDD lying around. Looking at the voltage
differences between the Spartan3 and the IDE interface, my guess is
that I don't need to do any fancy voltage-level adjustments, just put
10 Ohm resistors in the path for unidirectional lines, and 100 Ohm
resistors for the bidirectional lines. Does anyone have any helpful
comments on this? I've tried Google, and I came across a company who
had an ATA interface (though their board was different, Spartan II as
well), and that is all they (appeared) to have, which makes sense to
me.

Also, I will be starting a senior design project soon. My primary goal
when I graduate is to move into embedded devices/FPGAs. What would be a
good senior project (from the employer's perspective), and what should
I have on my resume to make me stand out (besides the VFW, 2nd bach
part I already have ;) )?

Thank you in advance.

Rob Elsner

Article: 80874
Hello,

I would like to start a serious adventure in FPGA development, so which
HDL would you recommend? I can restrict myself to a single chip vendor
(i.e. Altera, because their chips are quite cheap and very easy to
obtain in small quantities in Poland), thus the spectrum of
alternatives should be wider. The most important thing is good support
for genericity, for instance:

  generic type vector{N} where {const N : positive} is group of N bits

  x : vector(32);  -- x : std_logic_vector(31 downto 0)

or even constants:

  generic const square{N} : type of N where {const N} is N*N;

  const nine : int is square{3};

The languages I know about are:

VHDL: very disappointing, the nicest part of Ada has been removed. No
support for anonymous types, stiff and annoying syntax, weak interface
inference (needs explicit component specifications).

Verilog: same as above + lack of generate statements, so how does one
specify generic pipelines (very useful e.g. in parallel CORDIC
specifications)?

AHDL: I don't know much about it, because I can't find a good manual.

Best regards
Piotr Wyderski