Whoa, good, that's what I needed, a guy who knows a little more than I do about the analog Ethernet requirements. I'll make the necessary updates on the site. I didn't try long cables; that probably explains why I didn't run into problems. Will do, and try with 120 ohm resistors. Thanks. Jean "Allan Herriman" <allan.herriman.hates.spam@ctam.com.au.invalid> wrote in message news:orb700t0u712oa3tri7b5ub9264vki9flf@4ax.com... > On Tue, 13 Jan 2004 08:08:22 GMT, "Jean Nicolle" > <j.nicolle@sbcglobal.net> wrote: > > >I've published a 4-step "recipe" on how to send traffic. The experiment > >should be easy to follow through. > > > >You need: > >1. Any FPGA development board, with 2 free IOs and a 20 MHz clock. > >2. A PC with an Ethernet card, and the TCP/IP stack installed. > >3. Optionally, a network hub or switch. > >No Ethernet interface chip required. > > > >The Verilog source code (about 150 lines) is published here > >http://www.fpga4fun.com/10BASE-T0.html > > > >I'd be happy to hear how that works. > >The code has been tested in two different networks, with different FPGA > >boards from different vendors. > > > >I think the possible applications are pretty cool. > >Have fun! > >Jean > > Hi Jean, > > Some suggestions: > > 1. This web page http://www.fpga4fun.com/10BASE-T1.html > mentions that the Ethernet standard is "IEEE 802.3ae-2002". That's > the 10Gbps standard! I suggest you change this to "IEEE 802.3-2002 -- > Section One". > Here is the direct URL: > http://standards.ieee.org/getieee802/download/802.3-2002_part1.pdf > Chapter 14 (10Base-T) is the relevant part of that document. > > 2. There is no way your circuit will meet many of the Ethernet > electrical requirements (section 14.3 in the spec). 
Whilst it may > pass data, and be of great educational value, I think you really > should have a statement on your website saying that this isn't > Ethernet compliant, otherwise newbies will think that it really is > Ethernet and start designing it into their own products. > > 3. You probably do need the transformers, otherwise the section > 14.3.1.1 isolation requirements of 2400V (yes, that's 2.4 kV) are a > little difficult to meet. > The 1kV common mode surge requirements in 14.3.1.2.7 (transmit) and > 14.3.1.3.6 (receive) may also be difficult to meet without a > transformer. > > 4. The receiver may work better with a 120 ohm (or so) resistor > between the RD- and RD+ pins. This should give the receiver a better > return loss (which is meant to be 15dB minimum, according to > 14.3.1.3.4), and might improve the performance on long cables. > > > BTW, what error rate did you get over 100m of cable? > > Regards, > Allan.Article: 64776
Perhaps someone out there has an idea for working around the issue I have... I want to use the fast (133MHz) version of the PCIX core in a server that boots with the bus in PCI mode. Once the bus is enumerated, etc. the server resets the bus and switches it into PCIX mode (Intel SE7501 chipset on an Intel motherboard). Currently, the fast PCIX core causes the bus to hang when it is accessed in PCI mode. I want to make my device appear invisible on the bus until the bus is up and running in PCIX mode. I know that this is not plug and play friendly, but we think we can work around it in our driver. My only thought so far is to internally gate the IDSEL input with PCIX_EN from the core. Has anyone else dealt with this problem and successfully worked around it? Thanks in advance! Mark Schellhorn ASIC/FPGA Designer Seaway Networks http://www.seawaynetworks.comArticle: 64777
One would think that for $18k you could get a core that would run both PCI64 at 66Mhz and PCIX at 133MHz depending upon availability. As it is, though, I have to recommend using the PCI64 core and running your device at 66MHz. Here's three reasons why I did it that way: 1) The core is a pain in the rear when it comes to passing timing specs, and with a -4 chip the 133 is impossible; 2) the majority of RAID and multimedia cards run PCI64, including Intel and Adaptec's own ZCR stuff, and as soon as you put in a ZCR your whole bus will kick down to 66MHz rendering your board useless; 3) I have been unable to get one of Xilinx's controllers to successfully hot swap. Has anyone else? I, like the poster, have been unsuccessful in this. What's the trick? "Mark Schellhorn" <mark@seawaynetworks.com> wrote in message news:twXMb.10066$881.1470800@news20.bellglobal.com... > Perhaps someone out there has an idea for working around the issue I have... > > I want to use the fast (133MHz) version of the PCIX core in a server that boots > with the bus in PCI mode. Once the bus is enumerated, etc. the server resets the > bus and switches it into PCIX mode (Intel SE7501 chipset on an Intel > motherboard). Currently, the fast PCIX core causes the bus to hang when it is > accessed in PCI mode. > > I want to make my device appear invisible on the bus until the bus is up and > running in PCIX mode. I know that this is not plug and play friendly, but we > think we can work around it in our driver. > > My only thought so far is to internally gate the IDSEL input with PCIX_EN from > the core. Has anyone else dealt with this problem and successfully worked around it? > > Thanks in advance! > > Mark Schellhorn > > ASIC/FPGA Designer > Seaway Networks http://www.seawaynetworks.com >Article: 64778
Brannon King wrote: > One would think that for $18k you could get a core that would run both PCI64 > at 66Mhz and PCIX at 133MHz depending upon availability. As it is, though, I > have to recommend using the PCI64 core and running your device at 66MHz. > Here's three reasons why I did it that way: 1) The core is a pain in the > rear when it comes to passing timing specs, and with a -4 chip the 133 is > impossible; 2) the majority of RAID and multimedia cards run PCI64, > including Intel and Adaptec's own ZCR stuff, and as soon as you put in a ZCR > your whole bus will kick down to 66MHz rendering your board useless; 3) I > have been unable to get one of Xilinx's controllers to successfully hot > swap. Has anyone else? I, like the poster, have been unsuccessful in this. > What's the trick? Consider getting source code. -- Mike TreselerArticle: 64779
Hi Mark, I hope you don't mind that I've taken the liberty to answer the questions of several others in this email. You raise a good question. The behavior you see in your system is certainly not what is recommended in the PCI-X Addendum in terms of system initialization. The PCI-X Addendum suggests that a system evaluate M66EN and PCIXCAP to determine the lowest common denominator of bus mode, and then use that information to reset the bus into a mode that is appropriate. Section 9.10, paragraph four, from PCI-X Addendum 1.0b says: > A PCI-X system provides a circuit for sensing the state of the > PCIXCAP pin (see Section 14). Perhaps your system does not provide or use this circuit for sensing the states of M66EN and PCIXCAP. That aside, what your system is doing should still work, but it requires your card support operation in PCI mode (which is actually required for a compliant design). With the PCI-X core, you have several implementation options: * bitstream for PCI, 33 MHz with PCI-X, 66 MHz * bitstream for PCI, 33 MHz * bitstream for PCI-X, 66 MHz * bitstream for PCI-X, 133 MHz These implementations require different speedgrades. You can consult the datasheet or implementation guide for details. The first option is good if you require moderate performance and you want the simplicity of a single bitstream design. If you require ultimate performance, some extra steps are required. You would need to generate a PCI-X 133 MHz bitstream in addition to a PCI 33 MHz bitstream (this does not require redesign of the user application, simply a synthesis and place and route with different options). Then, you need to perform run-time reconfigurations whenever the busmode changes. The core has an output, RTR, indicating the wrong bitstream is loaded. One way to implement this is with a small CPLD and two PROMs. I'm sure there are others. Mike Treseler wrote: > Consider getting source code. That is considerably more expensive than $18k. 
If you want to explore that avenue, you should contact your local FAE. Brannon King wrote: > One would think that for $18k you could get a core that would > run both PCI64 at 66Mhz and PCIX at 133MHz depending upon > availability. I think $18k is a great deal for what you get. If you prefer less expensive: http://h18000.www1.hp.com/products/servers/technology/pci-x-terms.html This one is free. However, there are hidden costs in compliance verification and implementation details. And, you don't get any support. It may give you the opportunity, though, to pare the logic down to the bare minimum for your application -- which often helps performance. > 1) The core is a pain in the rear when it comes to passing timing > specs, and with a -4 chip the 133 is impossible For PCI at 33 MHz with our core, timing is a slam dunk. For PCI-X, at any frequency, the I/O timing is guaranteed if you use the parts and speedgrades listed in the datasheet (-4 is not among those...) The difficulty of the internal PERIOD constraint is a function of the core AND the user design, so it cannot be guaranteed. I do recognize that 133 MHz internal operation can be difficult. > 2) the majority of RAID and multimedia cards run PCI64, including > Intel and Adaptec's own ZCR stuff, and as soon as you put in a ZCR > your whole bus will kick down to 66MHz rendering your board useless In order for this to happen, there would have to be two slots on a 133 MHz bus segment. The motherboards I have seen only have one slot on a 133 MHz bus segment. If you have specific motherboards that have two slots on a 133 MHz segment, it would be useful to me to know what brand/part number. This behavior is more likely to happen if you have a four slot PCI-X 66 MHz bus, and you plug in a mix of PCI-only and PCI-X cards. The bus does have to run at the lowest common denominator. If one card doesn't support PCI-X, the bus does have to run in PCI mode. And if you are using our core, it requires 33 MHz in PCI mode. 
> 3) I have been unable to get one of Xilinx's controllers to > successfully hot swap. Has anyone else? I, like the poster, > have been unsuccessful in this. You should file a case with Xilinx Support so that someone can systematically debug the issue with you. Not knowing the failure mechanism, I can't speculate what might be the issue. EricArticle: 64780
Don't forget to check your hardware: bypass caps, layout, power supply, reference voltages. Maybe you have, or can borrow, a good oscilloscope or logic analyzer to measure what is going wrong. "etrac" <etraq@yahoo.fr> wrote in message news:c99b95c7.0401070133.38f7e294@posting.google.com... > Hello, > > I have implemented my own SDRAM controller in a Virtex II component in > order to use SDRAM modules Sodimm-PC133 (133 MHz frequency). > > My problem is that this block seems to work very well with MICRON > Sdram modules, but it is not fully stable with SMART modules. It seems > to be the burst reading which causes some bit errors (not many, we > have at worst 25 bit errors on 32Mb files). > > I think the FPGA block is OK, routing timings are correct, and I think > my problem may be on SDRAM timings. I used 180° phase of my DCM to > generate control signals and bring back datas, in fact I work on the > falling edge of the SDRAM clock. I have tried to work on the rising > edge but then results are much uncertain ! > > So my question is : Do you had some timing problems when controlling a > Sdram ? On which edge do you work ? > > etracArticle: 64781
The BUFG primitive is available for Spartan-IIE. Which tool or documentation indicates that it is not available? Depending on which synthesis tool you use, BUFG may be automatically instantiated in your design. If you want to directly instantiate a BUFG primitive, here is the template provided by the "Language Templates" option inside the ISE 6.1i Project Navigator. To open the templates, start Project Navigator and click Edit --> Language Templates. Expand the "VHDL" item in the selection tree, then "Component Instantiation", then "Global Clock Buffer", then "BUFG". Note: You can also invoke Language Templates by clicking the "light bulb" icon in the upper right menu bar. Here is the Language Templates template for the global clock buffer.
____________________________________
-- Instantiating BUFGP on an input port
-- INPUT_PORT: in std_logic;

--**Insert the following between the 'architecture' and 'begin' keywords**
component BUFGP
  port (I : in std_logic; O : out std_logic);
end component;

signal CLK_SIG : std_logic;

--**Insert the following after the 'begin' keyword**
U1: BUFGP port map (I => INPUT_PORT, O => CLK_SIG);
--------------------------------- Steven K. Knapp Applications Manager, Xilinx Inc. General Products Division Spartan-3/II/IIE FPGAs http://www.xilinx.com/spartan3 --------------------------------- Spartan-3: Make it Your ASIC "Tobias Möglich" <Tobias.Moeglich@gmx.net> wrote in message news:4003F514.3258F24B@gmx.net... > I use the Spartan-IIE from Xilinx > BUFG ist not available for this device. What can I use instead? > > And how do I have to implement it in the ucf-file or in the source file > (vhd-file)? > > Tobias.Article: 64782
"Patrick MacGregor" <patrickmacgregor@comcast.net> wrote in message news:<_KWdnR3ynYJgd2CiRVn-jw@comcast.com>... > Last year X announced a cool design win using their parts in a new Gibson > guitar line. Neat stuff. > > Couple days ago I see that A has a press release saying they stole the > business with NIOS + Cyclone. > > Today I see X saying S3 is the clear winner and Gibson is using it > exclusively. > > Anyone know what is really going on? > > Just curious. The Gibson product is kinda cool regardless of who's part is > in it. I don't really care about this particular product (I'm old school and my Les Paul goes into a '62 Fender Bandmaster), but I wanna know the REAL story as to why Les Pauls, even the expensive ones, require a fret dress right out of the box. -aArticle: 64783
Allan, > It's actually quite easy to make races in VHDL. I agree this is a clock to data race condition, however, it is also easy to avoid and easy to even forget the rule exists. Are you suggesting that Verilog race conditions are as simple to solve as this? :) After reading Cliff's papers, I would conclude that it is not an insignificant issue in Verilog. > Experience indicates that even simple examples > may produce different results on different > LRM compliant simulators. My read on the LRM and simulation cycle says that if two simulators execute your example differently, one of them is not compliant. Have you seen different results for a delta cycle situation like this that was not a simulator bug? Regards, Jim > Jim, when discussing the relative merits of Verilog > and VHDL it is important not to make false claims > about either language. Oops, it was not intentional. Verilog does have a large, common problem in this area. With VHDL it is minor enough to easily forget about it. Allan Herriman wrote: > On Mon, 12 Jan 2004 15:05:11 -0800, Jim Lewis <Jim@SynthWorks.com> > wrote: > > >>VHDL never had race conditions and never will. > > > It's actually quite easy to make races in VHDL. Experience indicates > that even simple examples may produce different results on different > LRM compliant simulators. > > > signal clk1, clk2 : std_logic; > signal sig1 : std_logic; > > ... > > clk2 <= clk1; -- clk2 lags clk1 by 1 delta > > process (clk1) > ... > sig1 <= foo; > ... > > process (clk2) > ... > bar <= sig1; > ... > > > Astute designers will modify their coding standards to disallow clock > assignments such as the one above. > > Jim, when discussing the relative merits of Verilog and VHDL it is > important not to make false claims about either language. > > Regards, > Allan. -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Jim Lewis Director of Training mailto:Jim@SynthWorks.com SynthWorks Design Inc. 
http://www.SynthWorks.com 1-503-590-4787 Expert VHDL Training for Hardware Design and Verification ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Article: 64784
Here is what I learned from our applications group: The BUFG primitive is available for Spartan-IIE. Which tool or documentation indicates that it is not available? Depending on which synthesis tool you use, BUFG may be automatically instantiated in your design. If you want to directly instantiate a BUFG primitive, here is the template provided by the "Language Templates" option inside the ISE 6.1i Project Navigator. To open the templates, start Project Navigator and click Edit --> Language Templates. Expand the "VHDL" item in the selection tree, then "Component Instantiation", then "Global Clock Buffer", then "BUFG". Note: You can also invoke Language Templates by clicking the "light bulb" icon in the upper right menu bar. Here is the Language Templates template for the global clock buffer.
____________________________________
-- Instantiating BUFGP on an input port
-- INPUT_PORT: in std_logic;

--**Insert the following between the 'architecture' and 'begin' keywords**
component BUFGP
  port (I : in std_logic; O : out std_logic);
end component;

signal CLK_SIG : std_logic;

--**Insert the following after the 'begin' keyword**
U1: BUFGP port map (I => INPUT_PORT, O => CLK_SIG);
============================= Tobias Möglich wrote: > > I use the Spartan-IIE from Xilinx > BUFG ist not available for this device. What can I use instead? > > And how do I have to implement it in the ucf-file or in the source file > (vhd-file)? > > Tobias.Article: 64785
On Tue, 13 Jan 2004 15:17:18 -0800, Jim Lewis <Jim@SynthWorks.com> wrote: >Allan, > > It's actually quite easy to make races in VHDL. >I agree this is a clock to data race condition, >however, it is also easy to avoid and easy to >even forget the rule exists. > >Are you suggesting that Verilog race conditions >are as simple to solve as this? :) After reading >Cliff's papers, I would conclude that it is not >an insignificant issue in Verilog. Races in Verilog can also be avoided by following a coding standard. I agree they are easier to make, though. I have never been bitten by a race in Verilog, whereas I have been bitten by races in VHDL on several occasions! This has always been in other people's code, though. This may be a reflection of the relative amount of time I've spent with the two languages, rather than any property of the languages themselves. > > Experience indicates that even simple examples > > may produce different results on different > > LRM compliant simulators. > >My read on the LRM and simulation cycle says that >if two simulators execute your example differently, >one of them is not compliant. >Have you seen different results for a delta >cycle situation like this that was not a >simulator bug? I didn't think the VHDL LRM defined the order in which processes are executed. Can you point to the particular part of the LRM that says this? Without a defined order, different simulators may produce different results. Yes, I have seen differences between VHDL simulators that weren't considered to be bugs (Simili vs Modelsim, using the delta delayed clock example from an earlier post). > > Jim, when discussing the relative merits of Verilog > > and VHDL it is important not to make false claims > > about either language. >Oops, it was not intentional. >Verilog does have a large, common problem in this area. >With VHDL it is minor enough to easily forget about it. I disagree with the "forget about it" part. Races in VHDL do cause problems in real-world designs. 
I have seen plenty of examples (including a broken ASIC). This seems to be more of a problem in testbenches, because designers typically don't introduce delta delays to clocks in synthesisable code. Regards, Allan.Article: 64786
One can also do this by reference using a "cat bus". For example, suppose you have the named signals, A, B, and C. You can create an 8-bit bus by naming it: "A,B,B,C,A,A,A,C" (remove quotes, can't remember if you might need parenthesis around the whole mess) Philip Freidin wrote: > On Wed, 07 Jan 2004 21:36:15 -0000, ad.rast.7@nwnotlink.NOSPAM.com (Alex Rast) wrote: > > A) > >>What's the way to do this? It's common for me to run into situations where >>I have a bus or bus pin, and I need to connect the same net to different >>lines on the bus. > > > B) > >>Another common one is I have 2 busses, both of which have >>a line that should connect to a single net. The documentation doesn't seem >>to give any hints. Thanks for any input. > > > While I have not used ECS, the way we did this in previous schematic > systems was to pass the source single through multiple "BUF" symbols. > <snip> > > Philip > > > > =================== > Philip Freidin > philip@fliptronics.com > Host for WWW.FPGA-FAQ.COMArticle: 64787
I have an SRAM controller Avalon slave that we created, which Nios can successfully access to read and write SRAM. My question is: I cannot set the program or data memory address in the Nios setup screen to this memory. Why not? Also: How does Nios determine where a malloc() will allocate memory from? Let's say I have GERMS on-chip in M4Ks, I have an on-chip MegaRAM slave, and an SRAM controller. I want Nios to malloc() big chunks from the SRAM. How do I set this up? Presently, a malloc of even 4 bytes returns NULL for some reason. PS. I can successfully run Nios code from the SRAM.Article: 64788
>>>Experience indicates that even simple examples >>>may produce different results on different >>>LRM compliant simulators. >> >>My read on the LRM and simulation cycle says that >>if two simulators execute your example differently, >>one of them is not compliant. >>Have you seen different results for a delta >>cycle situation like this that was not a >>simulator bug? > > > I didn't think the VHDL LRM defined the order in which processes are > executed. Can you point to the particular part of the LRM that says > this? Without a defined order, different simulators may produce > different results. It does not specify an order between processes running. It does specify when signals get updated vs. when processes get run. For any given execution cycle, first all signals that are scheduled to change on that cycle are updated and then processes are run. Hence in your example, both clk2 and sig1 get updated and then the process is run. As a result, as modified, the two processes simulate as if there is only one register. signal clk1, clk2 : std_logic; signal sig1 : std_logic; clk2 <= clk1; -- clk2 lags clk1 by 1 delta process (clk1) begin if rising_edge(clk1) then sig1 <= foo; end if ; end process ; process (clk2) begin if rising_edge(clk2) then bar <= sig1; end if ; end process ; In VHDL, (for the most part), order of execution of processes does not matter since we primarily use signals (which incur the delta cycle assignment delay). In Verilog, blocking assignments are similar to shared variables without restrictions. Hence, always block execution order comes into play. >>Oops, it was not intentional. >>Verilog does have a large, common problem in this area. >>With VHDL it is minor enough to easily forget about it. > > > I disagree with the "forget about it" part. Races in VHDL do cause > problems in real-world designs. I have seen plenty of examples > (including a broken ASIC). 
This seems to be more of a problem in > testbenches, because designers typically don't introduce delta delays > to clocks in synthesisable code. This will only happen when it is the same clock (and you should have not touched it) or it is a derived clock and the two clocks are delta cycle aligned (or close enough so that clk-q propagation delay is bigger than the clock skew). So if your designs are a single clock or multiple unrelated clocks, no problem. I would argue that we remember what caused us or our colleagues the most problems. So the things that I remember best are: Model timing in memory when timing is > 1 clock period. Made a design non-functional because the designer got data immediately. Respin of an ACTEL 1280 FPGA at $$$/piece (if I recall it was $500 US in 1992). I am guilty of writing the memory model. By the time we found the bug in the lab, the designer had gone to another project so I got to redesign his chip. Drive busses to "XXX" when they are idle. The external IP testbench model drove cycle type lines of an X86 processor to the same given cycle during idle. The cpu interface had a bug. It was fixed with a $.50 board part (which cost $50K year due to board volume). Be aware of inertial delay: I hardly remember this one, but I had some mysterious cancellations of signal waveforms (testbench) in my first project. Rather than figure out if it was an understanding issue or a bug, I replaced the after statement with a wait statement and went on. These may mean nothing to you as you may have never encountered them. I think you also remember things as to how you classify them. I never thought of this as clk-data race. I always thought of lumped with aligned derived clocks and file it under, "keep related clocks delta cycle aligned." Cheers, Jim -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Jim Lewis Director of Training mailto:Jim@SynthWorks.com SynthWorks Design Inc. 
http://www.SynthWorks.com 1-503-590-4787 Expert VHDL Training for Hardware Design and Verification ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Article: 64789
>"Hard-to-copy" means that if you make a mistake, you are toast. It is >inevitable that customers do not think of absolutely everything. The >new architecture will also allow us to provide EasyPath(tm), which if >the customer calls up (which has happened, and boy are they glad they >went with Xilinx!), and says "I forgot an inverter" we just modify the >test program, and they are back in business IMMEDIATELY, with only a >small charge (perhaps very small) for the modified test program work >that we have to do. No one else can do this, either! In fact, >EasyPath(tm) can be done for a few images (bistreams) and still retain >the savings. This fits it well with customers that wish to have single >platforms that perform multiple functions (ie cellular basestations >which are notorious for needing major changes every three months). I don't understand that. I assume we are discussing saving $ by only testing the parts of the chip that will be used rather than the whole thing. If the base stations are changing every three months, are they just switching between a small collection of choices that are known ahead of time, or are they rolling out new features? If they are rolling out new features, then they are probably making interesting changes to the FPGA code and probably would want the fully tested version rather than the cheaper one that might not run the new code. I like the idea of saving money, but I've never worked on gear that didn't get fixed/upgraded in the field. -- The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.Article: 64790
>I disagree with the "forget about it" part. Races in VHDL do cause >problems in real-world designs. I have seen plenty of examples >(including a broken ASIC). This seems to be more of a problem in >testbenches, because designers typically don't introduce delta delays >to clocks in synthesisable code. Why doesn't the compiler/simulator catch that case and give a warning/error message? Compile time checking is good. I remember when I was first learning about type checking. I didn't understand what was going on yet, just thrashing around until the compiler stopped complaining. Then one night the compiler slapped my wrist because I used an XXX rather than a pointer to an XXX and I figured out what was going on. That's the sort of thing that takes ages to debug the hard way. That error message had just saved me a lot of work. The really great thing about strong type checking is that you can make massive changes to your code, and the parts that don't get checked often (error handling) will still work. -- The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.Article: 64791
Rene Tschaggelar <none@none.none> wrote in message news:<41c489438e778508c6ca433ed9cabaf5@news.teranews.com>... > Browsing the 'cyclone device handbook' I spent a great > length to find : > -the max current for each supply ( VCCIO & VCCINT ) > -the expected clocking frequency at the input. I'm aware > that 1MHz may not be sufficient to PLL it up to 400MHz > or such. > > to little avail. While I can live with 2 switchmode supplies > generating 1.5V and 3.3V at 2A each, and a generic 8pin > socket to swap oscillators for a prototype, the documentation > is somehow inclomplete. > > Rene Hi Rene, See section 4 of the Cyclone device handbook. (Available at http://www.altera.com/literature/hb/cyc/cyc_c51004.pdf). Page 4-8 gives the current & power during configuration, and refers you to the Cyclone power calculator spreadsheet for doing what-if analysis on application circuits power and current needs while they're running. That spreadsheet is at: http://www.altera.com/products/devices/cyclone/utilities/power_calculator/cyc-power_calculator.html You'll have to enter what you think are reasonable worst-case numbers in terms of operating frequency, toggle rate, IO standards used etc. to get the current supply info you need. Page 4-31 of the device handbook gives the PLL frequency specs. The input frequency has to be between 15.625 MHz and 464 MHz for the fastest (-6) speed grade, and between 15.625 MHz and 387 MHz for the slowest (-8) speed grade. See http://www.altera.com/literature/hb/cyc/cyc_c51004.pdf for details. Regards, Vaughn AlteraArticle: 64792
The original post expired before I answered, so the original question is included below. Hi Jon, Sorry I didn't follow up to your post -- I was on vacation, and now google won't let me post a follow up (guess it expired). I've posted a follow-up now. This data is inside the QUIP documentation. Go to https://www.altera.com/support/software/download/altera_design/quip/quip-download.jsp to download it. The information on how to directly instantiate Stratix or Cyclone logic cells in any legal mode is in the stratix_wysuser_doc.pdf document. You can instatitate these primitives directly in verilog or VHDL files. Regards, Vaughn Altera =========================================================== From: J.Ho (hooiwai@yahoo.com) Subject: How to explicitly call out cell elements in Altera Stratix? This is the only article in this thread View: Original Format Newsgroups: comp.arch.fpga Date: 2003-12-05 17:31:30 PST Hi all, In Xilinx Virtex world, each element in the logic cell has a name and can be explicitly instanstiated, such as "muxcy_l" etc... Is there a way to do the same thing for the Altera Stratix device? In Xilinx data sheet all those cell elements has a name associated with it in the figure, so it made it easy to know which element to call up from the virtex library in the synthesis tool. I can't find any reference in the Altera document however, so just by inspecting the stratix hdl technology/timing model library I can't be sure which carry mux is which in the logic element. Would someone who had hand massage the code with technology elements share their method or the reference material from the vendor? Thanks! JonArticle: 64793
Hi,

Maybe you didn't compile the simulation libraries... If you have ISE 5.2 or later, see answer record #15338 in Xilinx support -- there you can find compxlib, which compiles the simulation libraries for ModelSim. My English isn't good... sorry ^^*

Good luck... bye

"Amontec Team, Laurent Gauch" <laurent.gauch@amontecDELETEALLCAPS.com> wrote in message news:4003e953$1@news.vsnet.ch...
> Hi,
>
> I have some trouble simulating a CLKDLL primitive with ModelSim.
>
> I included a CLKDLL mapping in my VHDL project to do a clk2x and clk4x.
> After synthesis, all is working fine regarding frequency values (I have
> 40 - 80 - 160 MHz).
>
> But now I have to simulate all of this with the main design.
> BUT how can I simulate CLKDLL without a body description of the unisim
> library?
>
> For now, I just wrote a new VHDL architecture for my CLKDLL. But is there
> a better solution for simulation?
>
> Best Regards,
> Laurent Gauch

Article: 64794
Can anybody tell me what the effect of the debug option in the nios-build command is on the original program? And what's the use of the debug script that the command generates?

I've connected my processor as a user-defined peripheral to the Nios system module through SOPC Builder. The system performs correctly only when I use the debug option with nios-build to compile the C/C++ source code. Without the debug option, the data fetched from my processor is the data from before I initiated the data transaction. I suspect it is a timing problem, but the system only works correctly once the debug option is used.

Article: 64795
I need a simulation model of an SRAM (AS7C256-12PC). Who can help?

Article: 64796
"Maxlim" <maxlim79@hotmail.com> wrote in message news:a6140565.0401140148.3e3c984b@posting.google.com... > Can anybody tell me what is the effect of debug option in nios-build > command to the original program? What's the usage of debug script that > generated in the command? > > I'd connected my processor as a user defined peripheral to nios system > module through SOPC Builder. The system performs corectly only when I > use debug option during nios-build- to compile the C/C++ source code. > Without the debug option, the data that fetched from my processor will > be the data before I initiate the data transaction. I'm suspect that > is the timing problem. But the system works correctly only after the > debug option is used. Turning on the debug option will do two things to the generated code - it will include debug information in the object files (so that a debugger can match up variables and addresses, and object code and source code), and it will turn off (or at least lower) the optomisation levels, making the generated code more debugger-friendly (trying to debug highly optomised code can sometimes be very confusing). The chances are, therefore, that your problem is that your code works with low optomisations and fails on high optomisations. This means either the compiler's optomiser is broken (possible, but unlikely), or your code is making unwarented assumptions about the generated object code. Very often, this can be solved by correct use of "volatile". For example, consider code such as: void generateSquareWave(void) { int i; while (1) { outputPort = 0xffff; for (i = 0; i < 1000; i++) ; // Wait a bit outputPort = 0x0000; for (i = 0; i < 1000; i++); // Wait a bit }; } Many people would think that this generates a slow square wave. With optomisation turned off, it probably will. 
But with high optimisation, the compiler will notice that the for loops don't actually do anything useful - they simply waste time, which is exactly what the optimiser is trying to avoid. It will therefore simply drop them, giving you an extremely fast square wave. The simple solution to this is to make "i" a "volatile int", which tells the compiler that you want to keep every read and write of the variable, and thus the delays stay in.

Another typical effect is that the optimiser will often re-arrange code to make it smaller or faster, even if that means changing the order in which data is read or written (or even *if* it is read or written). Again, "volatile" will help you enforce the correct order.

Article: 64797
Hi, there:

I am facing a complicated design with many internally generated clocks; see below. My problem is that when I define the clocks in the UCF file, ISE6 gives me warnings and removes all of the clock groups during NGDBuild at the initialization phase of the partial reconfiguration flow (meaning all black boxes for clk_gen, a, b and c). Is this the correct behavior for the partial reconfiguration design flow? I am wondering how the final assembly will work when all the clocks are removed. Thank you very much for your ideas.

Best Regards,
Kelvin

############################## Constraints

NET "internal_clock1_o" TNM_NET = "internal_clock1_o";
TIMESPEC "TS_internal_clock1_o" = PERIOD "internal_clock1_o" 1000 ns HIGH 50 %;
NET "internal_clock2_o" TNM_NET = "internal_clock2_o";
TIMESPEC "TS_internal_clock2_o" = PERIOD "internal_clock2_o" 1000 ns HIGH 50 %;
NET "internal_div_o" TNM_NET = "internal_div_o";
TIMESPEC "TS_internal_div_o" = PERIOD "internal_div_o" 2000 ns HIGH 50 %;
INST u_1 LOCK_PINS;
INST u_2 LOCK_PINS;
INST u_3 LOCK_PINS;
INST "u_1" LOC = "BUFGMUX7P";
INST "u_2" LOC = "BUFGMUX6S";
INST "u_3" LOC = "BUFGMUX1P";

///////////////////////////////////////////// Top level source code

module my_top( clk, clk_gate1, clk_gate2, ... );

wire internal_clock1;   // Gated
wire internal_clock2;   // Gated
wire internal_div;      // Divide by two
wire internal_clock1_o; // Gated
wire internal_clock2_o; // Gated
wire internal_div_o;    // Divide by two

clk_gen u_clk_gen( clk, clk_gate1, clk_gate2, clk_gate3,
                   internal_clock1_o, internal_clock2_o, internal_div_o );

BUFGMUX u_1( .O(internal_clock1), ...); // Instantiate a clock buffer.
BUFGMUX u_2( .O(internal_clock2), ...); // Instantiate a clock buffer.
BUFGMUX u_3( .O(internal_div), ...);    // Instantiate a clock buffer.

my_module1 a( internal_clock1, ........................);
my_module2 b( internal_clock2, ........................);
my_module3 c( internal_div, ........................);

endmodule

///////////////////////////////////////////// Warnings

Checking timing specifications ...
WARNING:XdmHelpers:625 - No instances driven from signal "internal_clock1_o" are valid for inclusion in TNM group "internal_clock1_o". A TNM property on a pin or signal marks only the flip-flops, latches and/or RAMs which are directly or indirectly driven by that pin or signal.
WARNING:XdmHelpers:644 - No appropriate elements were found for the TNM group "internal_clock1_o". This group has been removed from the design.
WARNING:XdmHelpers:807 - The period specification "TS_internal_clock1_o" was removed because the "internal_clock1_o" group was removed.

--
----------------------------------------------------------------
Xu Qijun
OKI Techno Centre (Singapore) Pte Ltd
TEL: 65-6770-7081  FAX: 65-6779-2382
EMAIL: qijun677@oki.com
----------------------------------------------------------------

Article: 64798
Hello,

Can I get a sample XSVF file or SVF file?

Thanks and Regards,
Srilekha

Article: 64799
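Regarding the request for a sample SVF file above: SVF is plain text, so a minimal fragment can be shown directly. Note that the instruction register length, opcode and data values below are made up for illustration; the real values come from the target device's BSDL file. (XSVF is a Xilinx-specific compacted binary form of SVF, so it can't usefully be shown as text.)

```
! Minimal, hypothetical SVF fragment.
! '!' starts a comment; every statement ends with a semicolon.
TRST OFF;                                   ! release test reset
ENDIR IDLE;                                 ! end state after SIR
ENDDR IDLE;                                 ! end state after SDR
STATE RESET;                                ! force TAP to Test-Logic-Reset
STATE IDLE;
SIR 8 TDI (FE) SMASK (FF);                  ! shift an 8-bit instruction
SDR 16 TDI (ABCD) TDO (1234) MASK (FFFF);   ! shift 16 data bits, check TDO
RUNTEST 100 TCK;                            ! clock TCK 100 times in Run-Test/Idle
```

A real programming SVF for a CPLD or PROM is just a long sequence of these same SIR/SDR/RUNTEST statements generated by the vendor tools.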
On Wed, 14 Jan 2004 03:54:16 -0000, hmurray@suespammers.org (Hal Murray) wrote: >>I disagree with the "forget about it" part. Races in VHDL do cause >>problems in real-world designs. I have seen plenty of examples >>(including a broken ASIC). This seems to be more of a problem in >>testbenches, because designers typically don't introduce delta delays >>to clocks in synthesisable code. > >Why doesn't the compiler/simulator catch that case and give a >warning/error message? It's not mentioned explicitly in the LRM, and it's not something that is encountered often. Regards, Allan.