Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
"Naimesh" <naimesh.thakkar@gmail.com> wrote in message news:1099884025.786270.61150@f14g2000cwb.googlegroups.com...
> How do I buffer a signal in XST as I feel it is getting overloaded.
> if I write
> Signal1 <= Signal;
> XST automatically optimizes it.

If the signal is a clock signal, you can instantiate a BUFG or multiple BUFGs to obtain manual load balancing.

Antti

Article: 75501
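Antti's BUFG suggestion might look like the following VHDL sketch; the entity and signal names are illustrative only, not from the original post:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity clk_fanout is
  port ( clk_in       : in  std_logic;
         clk_a, clk_b : out std_logic );
end entity clk_fanout;

architecture rtl of clk_fanout is
begin
  -- Two global buffers driven from the same source; assign half of the
  -- clocked loads to clk_a and the other half to clk_b for manual
  -- load balancing.
  bufg_a : BUFG port map (I => clk_in, O => clk_a);
  bufg_b : BUFG port map (I => clk_in, O => clk_b);
end architecture rtl;
```

The two buffered nets usually also need a KEEP (or synthesis-tool equivalent) attribute so the optimizer doesn't merge them back into one.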
johnsonlee@itri.org.tw (Johnson Lee) wrote
> > To summarise :-
> >
> > Test 1: PC LPT-->FPGA-->LCD. FAILS.
> > Test 2: PC LPT-->CPLD-->LCD. FAILS.
> > Test 3: PC LPT-->LCD. No CPLD. No FPGA. Works OK.
> > Test 4: CPLD-->LCD. No PC. Works OK
> > Test 5: FPGA-->LCD. No PC. Works OK.
> >
> > Is this correct?
> > Did you check _all_ outputs using an oscilloscope in tests 1+2?
> > Was the LCD connected to the same outputs in 1+2 as it was in 4+5?
> > Are you using the Altera Quartus development environment?
> > Could you post sample Quartus archive (.QAR) files?
>
> Hi Andrew,
> Yes, you are right about those 5 different tests!
> And I didn't check all outputs using an oscilloscope, only the Enable pin and
> some data bits. I can see from the oscilloscope that when the enable pin is
> asserted, the data pin voltage sweeps between 3.0V and 5.0V when the LCD
> module is switched on, but remains at 3.0V when the LCD is off.

Those voltages don't sound right. The logic should swing from 0 to Vcc. What power supply voltage are you running the CPLD / FPGA at? 5.0V or 3.3V? What power supply voltage are you running the LCD at? You're not missing a ground connection somewhere, are you?

> Same IO assignment in those files...
> Ya, I can show you my code!
> But I don't know how to do that!
> Mail the .QAR to you directly?

Can you put it on a web server and post the URL? If not, my mail is http://www.holmea.demon.co.uk/IMG/email.gif

Article: 75502
I bought the ML310 board from Xilinx just recently. I came across the same question, so I contacted Xilinx directly and asked them that very question. I was told that they plan to develop and provide such dedicated personality boards but haven't finished them so far. The only thing you can do right now is to get the counterpart of the Tyco connector and put together your own adapter module. So did I! :-)

Regards,
Quinn

"bh" <spam_not@nosuch.com> wrote in message news:MhBjd.41669$fF6.17119968@news4.srv.hcvlny.cv.net...
> The Xilinx ML310 evaluation board has a set of connectors
> for high-speed interfaces using a Tyco Z-Dok+ interface (6.25 Gbps).
>
> I've seen in some of the Xilinx brochures a proto board that mates
> to these connectors. Does Xilinx provide this board, or is there
> a third party that makes the personality module interface boards?
>
> The adapter board connector is a Tyco part 1367555-1, which is
> a Z-Dok+6 connector.
>
> I've even seen reference to proto boards, like on hitechglobal,
> that call out the HW-V2P-PM, which is described as
> "Virtex-II Pro ML310 Personality (Conversion) Modules".
> But I can't see any place to order such a thing.
>
> Anyone have any pointers?

Article: 75503
Hi Marc, thank you for your answer.

> The short answer is that I'm not aware of a _good_ way to specify it
> the way that you have described. In fact, some older (but not very
> old!) FPGA chips won't even let you output the clock to a general
> purpose I/O pad, regardless of whether you're using it for a synchronous
> interface or not!
> The typical way to do what you want is to use a DDR FF, if the device
> supports it. If it doesn't support it, you might be able to use a 2x
> internal clock to clock out a toggling 0/1 pattern from the I/O FF.

Yes, I have used DDR clock forwarding, but then it gets device dependent. Maybe I have to go for that here too.

> If none of those are options, the longer answer is that you may not have
> to specify it the way you are thinking of. What clock rate and how much
> timing margin do you have? In the past, I have used a MAX_DELAY
> constraint on the clock net itself, and if memory serves, this actually
> appeared to make sure the tools routed the clock to the I/O pad with a
> reasonable amount of prop delay. It won't be 500ps or anything, but it
> kept the delay in the low single digits.

OK, I have used MAXSKEW on my clock, but I'll look into MAXDELAY as well.

> Have fun,

Sure I will! :-)

> Marc

Hakon

Article: 75504
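The DDR clock forwarding discussed above is typically done with a DDR output register in the IOB; a minimal Virtex-II-style sketch (entity name and port list are illustrative) could be:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity clk_forward is
  port ( clk     : in  std_logic;
         clk_pad : out std_logic );
end entity clk_forward;

architecture rtl of clk_forward is
  signal clk_n : std_logic;
begin
  clk_n <= not clk;
  -- The DDR register drives '1' on the rising edge and '0' on the
  -- falling edge, so the pad reproduces the clock with IOB output timing.
  ddr_ff : FDDRRSE
    port map ( Q  => clk_pad,
               C0 => clk,  C1 => clk_n,
               CE => '1',
               D0 => '1',  D1 => '0',
               R  => '0',  S  => '0' );
end architecture rtl;
```

This is exactly the device dependence Hakon complains about: other families use differently named primitives (e.g. ODDR variants) for the same trick.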
Hello,

Against the background of partial reconfiguration I want to implement a communication system for distributing control information. Each reconfigurable area (slot) is a subscriber. The communication system straddles the complete FPGA from east to west. It shall work even while a slot is under reconfiguration. Therefore some logic has to be fixed in each slot.

Now my questions:
1. Will this approach work?
2. If the logic is fixed, do I need long lines and TBUFs, as used in conventional bus macros? Usually bus macros are required to provide fixed communication points within a slot.
3. How do I have to implement something like this? Which XAPPs cover it? I would like to treat it as a bus macro, for use in my complete design.
4. Would this core divide the slot into two parts, prohibiting routing from north to south?

Bye
Tom

Article: 75505
Hi Antti,

Thanks for your reply. I found one more solution to this problem: as you said, I can synthesize my Verilog component using any other 3rd-party tool and add it as a netlist while adding the peripheral using the import peripheral wizard. This should work. Haven't tried it yet, just thought of this... :-)

~mack.

mmkumar@gmail.com (mack) wrote in message news:<aba94305.0411070156.5833f7fa@posting.google.com>...
> Hi,
> I want to add my peripheral (which has both VHDL and Verilog design
> files) into a Xilinx MB system. I am using XILINX 6.3 currently. I went
> through the CREATE/IMPORT PERIPHERAL WIZARD. There is no option available
> there to mention that my design has both VHDL and Verilog. But I could
> successfully add my peripheral (RTLs) through the peripheral wizard. But only
> pcore/<design_top>/HDL/VHDL is created, and all my Verilog RTLs
> are present in the VHDL folder only. When I generated the bitstream, it gave me
> an error saying "file not found in repository ../hdl/vhdl/abc.vhd",
> but the abc component is my Verilog module. Kindly help me out with this...
>
> Regards,
> Mack..

Article: 75506
"Naimesh" <naimesh.thakkar@gmail.com> writes:
> How do I buffer a signal in XST as I feel it is getting overloaded.
> if I write
> Signal1 <= Signal;
> XST automatically optimizes it.

You can add "-max_fanout 50" (or whatever fanout you want) to the .xst config file or edit the appropriate GUI form... That setting is global, but it should be possible to pass it as a signal attribute.

--
Georg Acher, acher@in.tum.de
http://wwwbode.in.tum.de/~acher
"Oh no, not again !" The bowl of petunias

Article: 75507
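The per-signal form Georg alludes to might look like this for XST (the entity and the `hot_net` signal are hypothetical, purely for illustration):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity fanout_demo is
  port ( a : in  std_logic;
         y : out std_logic );
end entity fanout_demo;

architecture rtl of fanout_demo is
  signal hot_net : std_logic;  -- hypothetical high-fanout net

  -- XST attribute: limit replication/fanout for this one signal
  -- instead of changing the global -max_fanout setting.
  attribute max_fanout : string;
  attribute max_fanout of hot_net : signal is "50";
begin
  hot_net <= a;
  y       <= hot_net;
end architecture rtl;
```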
Hello

Is there a way to specify a maximum signal fan-out for Altera Flex10K FPGAs? The option seems to be available only for more recent families. I am explicitly duplicating high-fanout signals and putting "preserve" constraints on them, but it is tedious since I don't always know each signal's fan-out.

--
 ____ _ __ ___
| _ \_)/ _|/ _ \    Adresse de retour invalide: retirez le -
| | | | | (_| |_| | Invalid return address: remove the -
|_| |_|_|\__|\___/

Article: 75508
> > Hi Andrew,
> > Yes, you are right about those 5 different tests!
> > And I didn't check all outputs using an oscilloscope, only the Enable pin and
> > some data bits. I can see from the oscilloscope that when the enable pin is
> > asserted, the data pin voltage sweeps between 3.0V and 5.0V when the LCD
> > module is switched on, but remains at 3.0V when the LCD is off.
>
> Those voltages don't sound right. The logic should swing from 0 to
> Vcc. What power supply voltage are you running the CPLD / FPGA at?
> 5.0V or 3.3V? What power supply voltage are you running the LCD at?
> You're not missing a ground connection somewhere, are you?
>
> > Same IO assignment in those files...
> > Ya, I can show you my code!
> > But I don't know how to do that!
> > Mail the .QAR to you directly?
>
> Can you put it on a web server and post the URL?
> If not, my mail is http://www.holmea.demon.co.uk/IMG/email.gif

Hi Andrew,

I made a call to the local Altera FAE, and he replied that Vout should be above 2.5V for this FPGA at high level. The LCD module needs a 5V power supply, and so does the FPGA demo board. I made those measurements again this morning, and I see that the voltage swing has the same frequency as the Enable signal. Right now my LPT signals come from a Linux system, and the LCD Enable is polled every 1 second. I just received VHDL code from Altera which shows me how to change the pins to open-drain, and I will try this tomorrow. Also, I will send you my .QAR files when I get to the office tomorrow.

Thanks for your reply!

BR,
Johnson Lee

Article: 75509
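For reference, an open-drain output of the kind Altera suggested can be described in plain VHDL. The port names here are made up for illustration, and the pin needs an external pull-up to the LCD's 5V rail:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity od_driver is
  port ( drive_low : in  std_logic;   -- internal signal, e.g. LCD Enable
         pad       : out std_logic ); -- pin with external pull-up to 5V
end entity od_driver;

architecture rtl of od_driver is
begin
  -- Open-drain behaviour: actively pull the pad low, otherwise release
  -- it to high-Z and let the external pull-up take it to the 5V rail.
  pad <= '0' when drive_low = '1' else 'Z';
end architecture rtl;
```

This avoids ever driving the pin high from a 3.3V output stage, which is often the cleanest way to interface a 3.3V device to a 5V LCD module.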
On Mon, 08 Nov 2004 12:02:41 +0000, Georg Acher wrote:
> "Naimesh" <naimesh.thakkar@gmail.com> writes:
>> How do I buffer a signal in XST as I feel it is getting overloaded.
>> if I write
>> Signal1 <= Signal;
>> XST automatically optimizes it.
>
> You can add "-max_fanout 50" (or whatever fanout you want) to the .xst config file
> or edit the appropriate GUI form... That setting is global, but it should be
> possible to pass it as a signal attribute.

Look at the constraints guide: you can set max fanout for a specific signal, or you can do it globally.

Article: 75510
Hello,

I am not sure if this is the right NG, but since it concerns memory driven by an FPGA, here goes.

My question is about burst writes to SDRAM memory (be it standard, DDR or DDR2).

Is it possible to sustain a burst write for an undefined number of words? Here is my setup: I have some incoming flow of data arriving at a constant speed of, say, 250 MWords/s, which needs to be written to memory in sequential order until a Stop signal ends the burst. The length of the flow can be several times the size of the memory, in which case the later data overwrites the old.

Do SDRAMs require dedicated refresh cycles, even if the write cycles will access in turn every possible location in the memory?

Alternatively, would there be a way of refreshing one bank while writing into another, without interrupting the 250 MWords/s data flow?

If this is technically possible, do SDRAM controller IPs available from FPGA vendors (i.e. Xilinx, Altera) support sustained writes with no gaps in the data flow?

Any pointers to literature, memory types, or SDRAM controller IPs would be appreciated.

Alex

Article: 75511
Hi,

My ISE has stopped working (I use Linux), which I think is probably due to the Wind/U registry or something. When I start "ise" and then try to open a project, I get a dialog saying "Can't access this folder. Path is too long". In the terminal which I used to launch ise, I also get errors such as:

Wind/U Error (251): Function RegOpenKeyExA - A fatal registry I/O failure has occurred. A registry daemon may not be running. Restart your application and verify that a registry daemon is running.

Any ideas what is wrong, and how to correct it? Thanks...

Article: 75512
"Alex Ungerer" <alex.ungerer@chauvin-arnoux.com> wrote in message news:24a8d57b.0411080753.41f0e78d@posting.google.com...
> Hello,
>
> I am not sure if this is the right NG, but since it concerns memory
> driven by an FPGA, here goes.
>
> My question is about burst writes to SDRAM memory (be it standard, DDR
> or DDR2).
>
> Is it possible to sustain a burst write for an undefined number of
> words? Here is my setup:
> I have some incoming flow of data arriving at a constant speed of,
> say 250 MWords/s, which needs to be written to memory in a sequential
> order, until a Stop signal ends the burst. The length of the flow can
> be as long as several times the size of the memory, in that case the
> later data overwrites the old one.

Don't expect any available IP core to be able to do this. You will have to get a fully custom SDRAM controller specially made for your application.

Antti

Article: 75513
Hi Antti,

FYI, in one of our more recent releases, we've added storage qualification (a more powerful kind of the clock enable that you referred to) and the ability to handle slow and stopped clocks (complete with the typical "Slow or stopped clock?" message that regular LAs give you).

It sounds like you've used OCI tools pretty extensively -- I'd be interested in hearing about your wishlist for OCI tools. For instance, what do you mean by cross-platform? Do you mean a combined SW/HW debug environment?

Also, how about the open design idea? Would you like to be able to design your own debug cores or modify existing ones?

In any case, we always like to hear from the experts, so keep the comments flowing!

Regards,

-Brad

Antti Lukats wrote:
>
> As for CS vs SignalTap vs Identify - all are good tools, but I wish there
> were something better. Something that is cross-platform and more open in
> design - ChipScope does not provide an option for low clock or clock enable,
> and, well, my wishlist is long. So long it might be easier to do it myself
> than attempt to use existing tools.
>
> Antti

Article: 75514
agb wrote:
> Hi,
> My ISE has stopped working (I use Linux), which I think is probably
> due to the Wind/U registry or something. When I start "ise", and then
> try to open a project, I get a dialog saying "Can't access this
> folder. Path is too long". In the terminal which I used to launch
> ise, I also get errors such as:
> Wind/U Error (251): Function RegOpenKeyExA - A fatal registry I/O
> failure has occurred. A registry daemon may not be running. Restart
> your application and verify that a registry daemon is running.
>
> Any ideas what is wrong, and how to correct it?

Assuming it is the Wind/U stuff, make sure that the Wind/U processes are not running; kill them if they are. Then completely delete the ~/.windu* directories. When you rerun any ISE program, they will be recreated.

--
My real email is akamail.com@dclark (or something like that).

Article: 75515
Hi all,

A little rant, but anyway... Anyone else on this group annoyed with the Software Licenses Database at Xilinx? I was trying to get an update for my EDK, and noticed it again. I'm not in the computer ;-)

I receive emails all the time from Xilinx, because "I'm a registered user". I have a Xilinx Site ID, a Xilinx Support ID, a lot of emails with registration and product IDs, and suddenly they don't find anything anymore. I tried to send an email to swreg at Xilinx: no response for a week. Xilinx support via phone last Thursday: they have to call me back, they don't know why I didn't receive my update. Call to support today: they are sorry, but the system is down...

Please XILINX, please fix it!

BTW, I had to go through the same circus for the 6.1 to 6.2 update already, and thought it was fixed...

/rant

So, back to squeezing ns again ;-)

Cheers

Article: 75516
I suspect you'll need regular refreshes. You'll need to raise the internal SDRAM clock rate to allow for refresh cycles and for initiating write cycles... after all, you can only burst-write a page at a time, and these operations all require multiple clock cycles to perform.

You could parallel multiple SDRAMs. SDRAM controller cores will generally let you do that. This increases the number of required connections substantially, but also multiplies the amount of data stored per clock transition and the amount of data that can be stored per write burst, drastically reducing the necessary SDRAM clock rate for a given data rate. This also reduces the number of refresh cycles that would be required per datum stored.

Dwayne Surdu-Miller
------------------------------------------------

Alex Ungerer wrote:
> Hello,
>
> I am not sure if this is the right NG, but since it concerns memory
> driven by an FPGA, here goes.
>
> My question is about burst writes to SDRAM memory (be it standard, DDR
> or DDR2).
>
> Is it possible to sustain a burst write for an undefined number of
> words? Here is my setup:
> I have some incoming flow of data arriving at a constant speed of,
> say 250 MWords/s, which needs to be written to memory in a sequential
> order, until a Stop signal ends the burst. The length of the flow can
> be as long as several times the size of the memory, in that case the
> later data overwrites the old one.
>
> Do SDRAMs require dedicated refresh cycles, even if the write cycles
> will access in turn every possible location in the memory?
>
> Alternatively, would there be a way of refreshing a bank while writing
> into another one, without interrupting the 250 MWords/s data flow?
>
> If this is technically possible, do SDRAM controller IPs available
> from FPGA vendors (i.e. Xilinx, Altera) support sustained writes with
> no gaps in data flow?
>
> Any pointers to literature, memory types, SDRAM controller IPs, would
> be appreciated.
>
> Alex

Article: 75517
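To put rough numbers on the refresh overhead (assumed, typical figures, not from the thread or any particular datasheet): with 8192 rows to refresh every 64 ms, and each AUTO REFRESH occupying on the order of 80 ns, the bandwidth lost to refresh is only about 1 percent — but the controller must still be able to buffer the incoming stream for that long:

```latex
t_{REFI} = \frac{64\ \mathrm{ms}}{8192\ \text{rows}} \approx 7.8\ \mu\mathrm{s}
\qquad
\text{overhead} \approx \frac{t_{RFC}}{t_{REFI}}
\approx \frac{80\ \mathrm{ns}}{7.8\ \mu\mathrm{s}} \approx 1\%
```

So a small input FIFO plus a clock rate a few percent above the raw data rate is usually enough headroom for refresh and row-activate cycles.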
Antti,

Here's my list!
1) Simulate or die.
2) That's it.

Seriously, ChipScope is a great tool for checking whether your hardware is working, perhaps for PCB faults or level/SI problems. It can also be good for catching problems with live data if your simulation takes a long time, although a good simulation strategy breaks a big design down into smaller blocks which are much more manageable in terms of simulation time. Bob's point about regression testing can't be overstated. It can also be used to capture real data to feed into your simulation, to make test benches easier to generate.

Finally, I could make ChipScope twice as useful overnight. Add a bloody clock enable to it.

Cheers, Syms.

Article: 75518
Antti Lukats wrote:
> "Nicolas Matringe" <matringe.nicolas@numeri-cable.fr> wrote in message
> news:418B3A65.9040700@numeri-cable.fr...
>
>> Antti Lukats wrote:
>>
>>> advice: if you do any serious FPGA verification (with Xilinx silicon) you
>>> *MUST* use ChipScope - no way around it. There are other OCI solutions
>>> available of course also, but I would definitely consider ChipScope as the
>>> primary tool.
>>
>> I still wonder why Xilinx is *selling* this tool, especially since you
>> can't do much serious work without it.
>> Altera's SignalTap is free and (IMO) much more user friendly.
>
> You are right - it would be much nicer if ChipScope were free (at least for
> those who have ISE full...). I got ChipScope initially as bundled software
> with an ML300 (total value of purchase >$5000 USD); that CS was version 5.1 and
> there was no free update to even 5.2!! That was bizarre! And the price went
> up 2 times, which also isn't such a nice change. I guess the reason Xilinx is
> selling ChipScope is that the ChipScope cores, including ILA (not only ATC2),
> are designed by Agilent, so there could still be some ownership issue. This
> information (about who wrote the ILA cores) is from inside Agilent, so I
> assume it's correct. Possibly that also explains why the core integration
> isn't always working as smoothly as it could be and why Xilinx is still
> struggling to get the ChipScope analyzer to work on Linux.
>
> I have used ChipScope for a long time, and sure have had a lot of struggle
> with it. It's getting better with every service pack. And if you KNOW it,
> you can use it in a very friendly manner. If you don't, well, then you have
> to learn, possibly the hard way.
>
> I don't want to say that, but when I first tested SignalTap I was really
> surprised how easy it was! Funny thing is that I used SignalTap to check out
> how MicroBlaze works in Cyclone :)
> OK, YES, SignalTap is easy (it's not directly free, as you need to use it on
> a PC that is required to be online and sends reports back to Altera).
> SignalTap doesn't have some features that I use in ChipScope; VIO and the
> core generator are not there.
>
> Hm, another thing that is missing from ChipScope is upload of user memories!
> (SignalTap can do that.)
>
> OK, enough :)
> Antti
> PS I still have a dream of doing a cross-platform OCI system some day
> (partial work is completed)

Antti,

Let me clear up a couple of points you have brought up.

First, the original ChipScope Pro ILA, IBA and VIO cores, along with the Generator, Inserter and Analyzer tools, were all developed entirely by Xilinx from the very beginning. The only collaboration with Agilent for deliverables within the ChipScope Pro toolset has been the development of the ILA/ATC and ATC2 cores, which has been shared nearly equally during our partnership. (Agilent, of course, has put a great deal of effort into their Trace Port Analyzer and the new FPGA Dynamic Probe tools.) The information you have received was incorrect.

Second, since the product's introduction in March 2000, the price has only gone up once, from $495 to $695, to account for a number of new cores and features. I can't speak for prices of integrated packages, but any increases have not been due to price changes of ChipScope Pro itself.

Third, ChipScope Pro is currently (version 6.3i) available for Linux platforms for core generation and insertion, and expect to see Analyzer support soon.

Finally, we still charge for ChipScope Pro because we consider it a "value-added" product above and beyond the scope of the ISE toolset, like other tools such as EDK, PlanAhead, and System Generator. It is not a required tool (even though you consider it a "*MUST* have" -- thanks for the endorsement!), and I expect we will follow this model for the foreseeable future.

Thanks for your comments,

David Dye
Xilinx Technical Marketing
Longmont, Colorado

Article: 75519
Brad,

Add at least one clock enable to the main capture clock. The problem with ChipScope is that it relies on the slowest part of the fabric, i.e. the BlockRAMs, to store the data. This means a lot of designs that run fast in the rest of the fabric are too quick to be analysed with ChipScope. If ChipScope had a (or maybe several) clock enable(s), it would be easy to get around this problem, e.g. instantiate two ChipScopes, enabled antti-phase. (Ho ho! Sorry Mr. Lukats! I see, reading back, you've suggested this too.)

Cheers, Syms.

"Brad Fross" <brad.fross@xilinx.com> wrote in message news:418FA79C.2000705@xilinx.com...
> Hi Antti-
>
> FYI, in one of our more recent releases, we've added storage
> qualification (a more powerful kind of clock enable that you referred
> to) and ability to handle slow and stopped clocks (complete with the
> typical "Slow or stopped clock?" message that regular LA's give you).
>
> It sounds like you've used OCI tools pretty extensively -- I'd be
> interested in hearing about your wishlist for OCI tools. For instance,
> what do you mean about cross-platform? Do you mean a combined SW/HW
> debug environment?
>
> Also, how about the open design idea? Would you like to be able to
> design your own debug cores or modify existing ones?
>
> In any case, we always like to hear from the experts, so keep the
> comments flowing!
>
> Regards,
>
> -Brad
>
> Antti Lukats wrote:
> >
> > As for CS vs SignalTap vs Identify - all are good tools, but I wish there
> > were something better. Something that is cross-platform and more open in
> > design - ChipScope does not provide an option for low clock or clock enable,
> > and, well, my wishlist is long. So long it might be easier to do it myself
> > than attempt to use existing tools.
> >
> > Antti

Article: 75520
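Assuming a capture core that had the clock-enable port Symon asks for (hypothetical here), the two antiphase enables are trivial to generate — each core then samples alternate cycles of the fast clock, halving the rate either BlockRAM must sustain:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity phase_split is
  port ( clk        : in  std_logic;
         ce_a, ce_b : out std_logic );
end entity phase_split;

architecture rtl of phase_split is
  signal phase : std_logic := '0';
begin
  -- Toggle every cycle; ce_a and ce_b are active on alternate cycles,
  -- so two capture cores together see every sample of the fast clock.
  process (clk)
  begin
    if rising_edge(clk) then
      phase <= not phase;
    end if;
  end process;
  ce_a <= phase;
  ce_b <= not phase;
end architecture rtl;
```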
"Brad Fross" <brad.fross@xilinx.com> wrote in message news:418FA79C.2000705@xilinx.com...
>
> Also, how about the open design idea? Would you like to be able to
> design your own debug cores or modify existing ones?

That would be great. I wouldn't mind contributing stuff back in, and I'm sure that goes for a lot of folks.

Best, Syms.

Article: 75521
Has anyone used Alpha Data's ADP-DRC-II board? How are the interfacing tools? Any problems?

--
Geoffrey Wall
Masters Student in Electrical/Computer Engineering
Florida State University, FAMU/FSU College of Engineering
wallge@eng.fsu.edu
Cell Phone: 850.339.4157

ECE Machine Intelligence Lab
http://www.eng.fsu.edu/mil
MIL Office Phone: 850.410.6145

Center for Applied Vision and Imaging Science
http://cavis.fsu.edu/
CAVIS Office Phone: 850.645.2257

Article: 75522
"Symon" <symon_brewer@hotmail.com> wrote in message news:2v9q51F2j8q2hU1@uni-berlin.de...
> Antti,
> Here's my list!
> 1) Simulate or die.

Sure, Symon! - "sure" translated from my mother tongue means "die!" ;) - not kidding. Did you ever look at the signal coming from RocketIO when there is no valid signal applied to RXN/RXP? It is something NEVER documented anywhere. It can be analyzed, and the result can be used to determine whether there is some burst or a longer period of silence. Without capturing the real data it's not possible to implement the logic. Not possible to simulate before you have at least once captured the real signal. There are other similar scenarios where pure simulation (without ever doing FPGA verification at all) will not work. That's what I referred to.

> 2) That's it.

Sure, it's the basic rule of digital design - when you do it right (i.e. when you connect the wires) then it will always just work. From that, if it works in simulation, it must work in the FPGA? I bet most of us know that it isn't so. Something that works in simulation doesn't necessarily work without any change in the FPGA, for whatever reason. It's a little bit better with ASICs; FPGAs and FPGA tools have too many things unknown or weird (to make the first FPGA tests always successful after successful simulation).

> Seriously, ChipScope is a great tool for checking whether your hardware is
> working, perhaps for PCB faults, or level/SI problems. It can also be good
> for catching problems with live data if your simulation takes a long time,
> although a good simulation strategy breaks down a big design into smaller
> blocks which are much more manageable in terms of simulation time. Bob's
> point about regression testing can't be overstated. It can also be used to
> capture real data to feed into your simulation to make test benches easier
> to generate.

Did I say something very different? If the simulations work but the hardware isn't working, then it may be a good idea to look at what's happening. Optionally capture the real signal and improve the testbench, sure.

> Finally, I could make ChipScope twice as useful overnight. Add a bloody
> clock enable to it.

A version of ChipScope that has that improvement, and not only that, exists already. Well, I don't know the release date, but it's coming. Really!

> Cheers, Syms.

cheers
Antti

Article: 75523
The Spartan-3 device can be made 5V tolerant by using a series resistor to limit the amount of current flowing into a forward-biased clamp diode when the input voltage exceeds 4V. The question I have is how this affects the AC timing, in light of the diode recovery time when going from forward to reverse voltage. Let's suppose the maximum forward current is 1 mA.

Article: 75524
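A rough worst-case sizing (assumed values, not from any datasheet): with the clamp becoming active near 4V and a 5.5V worst-case driver, limiting the clamp current to the stated 1 mA requires

```latex
R \;\ge\; \frac{5.5\ \mathrm{V} - 4.0\ \mathrm{V}}{1\ \mathrm{mA}} \;=\; 1.5\ \mathrm{k\Omega}
```

Assuming something like 10 pF of pin plus trace capacitance, that resistor also adds an RC delay of roughly τ = 1.5 kΩ × 10 pF = 15 ns to the input edge, which in practice tends to dominate over the clamp diode's reverse-recovery time.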
"Symon" <symon_brewer@hotmail.com> wrote in message news:2v9ql4F2j6h6kU1@uni-berlin.de...
> Brad,
> Add at least one clock enable to the main capture clock. The problem with
> chipscope is it relies on the slowest part of the fabric, i.e. the
> BlockRams, to store the data. This means a lot of designs that run fast in
> rest of the fabric are too quick to be analysed with ChipScope. If chipscope
> had a (or maybe several) clock enable(s) it would be easy to get around this
> problem. e.g. Instantiate two chipscopes, enabled antti-phase. (Ho ho! Sorry
> Mr. Lukats! I see, reading back, you've suggested this too.)
> Cheers, Syms.

Hi Symon,

Sorry, I commented to you on the other thread, but Brad had it covered already. YES, I have used ChipScope to capture on 4 phases, e.g. doing internal clock resolution enhancement by a factor of 4. I have also used ChipScope to capture raw data from RocketIO, giving a sample rate of 3 GS/s; well, that application required our custom replacement for the "ChipScope Analyzer".

Antti