John_H wrote:
> Al wrote:
>
> If you pass your trigger through the FPGA, your measurement can be off by +/- hundreds of picoseconds from one measurement to the next. Over many measurements your gaussian jitter will be increased by a substantial amount only because of the jitter.

Could you tell me why and how the FPGA will affect this measurement? I didn't understand it, as simple as that.

> Using a complex trigger condition through an FPGA doesn't mean you have to measure the signal coming out of the FPGA. Generate the mask in the FPGA and use it to gate the original (always-on) trigger signal the measurement is performed on. You choose the edge(s) to analyze, but through a very clean external gate unaffected by the jitter-laden mask signal coming from the FPGA.

This is correct, but unfortunately the system has a very widely distributed trigger, so this operation would have to be done in many places and would result in a component overload.

> If you insist on analyzing the signal leaving the FPGA, please let me know the company you're working for so I can avoid considering your products for any future needs.

Very kind, but no worries: I will not bother you with any time-measurement product, I will mostly bother you with some other questions, waiting for some other useful answers.

--
Alessandro Basili
CERN, PH/UGC
Hardware Designer

Article: 112376
Vangelis wrote:
> Does anyone know where I can find VHDL models for DDR-I SDRAM modules? I have an XUP board and I want to run some simulations before downloading my design to the FPGA. I am not looking for a specific model. Any generic DDR-I SDRAM model will do the job.

Check out http://www.freemodelfoundry.com/model_list.html

For example, mt46v32m8.vhd is a DDR SDRAM chip.

Patrick

Article: 112377
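As a rough sketch of how one of those Free Model Foundry models can be pulled into a ModelSim run - the library name fmf, the utility-package file names and the design/testbench names below are assumptions based on the typical FMF packaging, so check the headers of the model actually downloaded:

    # compile the FMF utility packages and the DDR model into a library called fmf
    vlib fmf
    vcom -93 -work fmf gen_utils.vhd conversions.vhd
    vcom -93 -work fmf mt46v32m8.vhd

    # compile the user design plus testbench into work and simulate with ps resolution
    vlib work
    vcom -93 ddr_ctrl.vhd tb_ddr_ctrl.vhd
    vsim -t ps work.tb_ddr_ctrl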
radarman schrieb:
> Parallax, the company that makes those nifty little BASIC stamps, is (or was) selling off the last of the Cyclone and Stratix "Fast/Smartpack" boards. These are pretty utilitarian boards, with a custom serial programmer & configuration system, a serial line driver, a clock oscillator, and a handful of LEDs.
>
> They would have been great for embedding in projects had they not been so ridiculously expensive. It's a shame, really; I liked the minimalist approach of just providing I/O and basic configuration / clock circuitry.
>
> At any rate, I bought one of the closeout Stratix Smartpacks with an EP1S10 on eBay for $76.
>
> I'm curious to know if any add-on boards were developed for these? Perhaps a "mainboard" with a bit of RAM, ethernet, etc?

I guess Chip got so involved with his long-term dream (the custom chip - the Propeller) that they dropped the FPGA boards from the product line. On their website there is no info at all any more about the Smartpacks, so I am afraid the eBay auction was just a one-off - I was going to buy it but slept through it. I need some Stratix-based board (anything with a Stratix on it) for some verifications; the Smartpack would have been just right but I didn't get it, so you can have fun instead :)

I don't think there were any accessories made, so you just have a board and nothing else.

Antti

Article: 112378
Al,

I am concerned you may not have the skills to do this right. I suggest you have an engineering review of the proposed architecture with those who know this field.

See below,

Austin

-snip-

> I'm sorry Austin I didn't get your point at all. I'm not talking about delay (and I think you got this), so how can a signal-in signal-out add 35 picoseconds of jitter? You said peak to peak but maybe I didn't explain what jitter is to me:

I use the ITU definition: jitter is the deviation from a perfect reference in the zero crossing(s) of a signal.

Any circuit adds white noise on the way in and on the way out. If the signal has a finite rise time (which they all do), then this noise creates timing uncertainty (jitter).

> Given a fixed source that we know is stable in time (no matter how)

Good trick. How do you define stable? A native rubidium cell? A double-oven crystal resonator? A SAW oscillator? What frequency? What is the reference's time interval error (TIE)? A stable reference is an oxymoron (a contradiction in terms); as with any reference, there are imperfections.

> a signal produced from this source with some combinational logic and delay (like cables and I/O delay), the output distribution will be gaussian if we have a white-noise environment. The jitter I'm talking about will be (basically) the sigma of this distribution.

Averaging over time will allow you to get a better measurement, but only if you can obtain many such samples, and there are no deterministic causes of jitter (of which there are many, such as power supply noise, or other signals switching, or pattern jitter from inter-symbol interference, and the list never seems to end).

>> An ASIC is probably the last thing I would choose to do jitter measurement. As I have said, you do anything wrong (at all), and you will fail.
>
> Does that mean that all these ASIC TDCs you find around are just junk? They are ASICs, nothing more, just dedicated devices to measure time. There are basically two types of TDC AFAIK:

If someone has something that already works, and is proven, by all means use it and stop trying to re-invent the wheel.

> 1) time-expansion based: an amplitude measurement that is proportional to a time measurement
> 2) a calibrated standard-cell delay line to shift in the value.
>
> In the latter the measurement is quite a bit more precise, because typically the time-expansion circuitry is an analogue circuit which has much worse stability than an integrated standard-cell delay.

As I said, if you already have something that works, please don't use an FPGA, as your level of understanding appears to be a recipe for disaster.

Building such circuits is an art form.

> Sorry I didn't understand this as well, what do you mean by
>
>> Jitter is the result of converting amplitude variations into phase variations

Since every signal has a rise time, and every circuit switches at some threshold, and every circuit has noise, and every signal also has noise, the instant at which the circuit switches varies from rising edge to rising edge. Thus any individual edge has uncertainty.

Variations in amplitude (the noise) create variations in time (phase).

>> To resolve the time you desire, it requires very high speed design (PECL), and virtually perfect power distribution, and signal integrity.
>
> I do agree power distribution has a major effect on the time measurement, but that's why "calibration procedures" have been invented.
>
> You basically subtract (it is a deconvolution operation to be precise, even though many physicists deny it) the environment noise from the measurements. This operation is quite complicated because you need to ensure that power consumption doesn't vary in a way that will affect the measurement.

Yes. And then, magic occurs (?). Signal processing may remove many noise-like signatures, but you obviously recognize that not all noise is random, and it is those components that add the ultimate uncertainty.

Again, I thought you had to time-stamp events, and as such, you did not have millions to average over. If you have millions of events, then you may use any technology you like, as long as you are able to remove/prevent the deterministic components, and there are enough samples to obtain the required certainty (resolution).

I prefer to start with as good a signal as I can, and as good a reference as I require, rather than try to fix something when I am done.

(?) If not everyone agrees that this works, then you have a much larger problem. It also confirms my suspicion that you are lacking some fundamental understanding of signals and noise.

Article: 112379
radarman wrote:
> already5chosen@yahoo.com wrote:
>> radarman wrote:
>>>
>>> This is all well and good, but even with smart compilation turned on, things don't work quite as well as they should. I'm designing my own CPU core, so I'm not using NIOS, but I still need to be able to update BRAMs with new program code.
>>>
>>> I've already worked around (sort of) the fact that for some reason, Quartus won't properly update a BRAM from a .hex file. (.hex files work fine for compilation, but not a mif/hex update.) So, I bring my .hex file into Quartus, and save it as a .mif file. This is annoying, since I can no longer automate the whole build process. At some point, when I get irritated enough, I'll write a program that converts hex to mif correctly, but I haven't had the time for much extra programming lately.
>>>
>>> However, the MORE irritating problem I've run into is that if I repeat the mif/hex update process a second time, changing only the .mif file, Quartus launches into a full recompile anyway. This is a bit frustrating, since the "hardware" is stable at this point. I just need to test software. Perhaps I've missed something, but I've yet to find a way to avoid the full recompile on the second update.
>>>
>>> Maybe this works correctly in the NIOS flow, but I don't have that tool available. I did try the NIOS eval, though, and I didn't see any new tools in it that looked like they would solve my problem.
>>
>> What version of Quartus? Service pack? Critical update? Sounds like you were hit by that:
>> http://www.altera.com/support/kdb/solutions/rd03062006_72.html
>>
>> BTW, if you want to minimize troubles please store your .hex files in the Quartus project directory.
>
> I first noticed the problem in 5.1 SP2, but have since updated to 6.0 SP1. I already store the init files in the project folder - they're automatically copied as part of the build scripts.
>
> Note, my projects aren't failing (they run properly), I just have to sit through a full recompile every second update.
>
> However, this might explain some curious behavior I was seeing before I upgraded to 6.0. My processor would occasionally hang on the first JSR instruction after doing an incremental compilation. I suspected that the memory wasn't working correctly, but it never occurred to me that the tool might be at fault. I thought that perhaps I had set something up incorrectly - since the ROM appeared to be working just fine. (My design has a 16kB instruction ROM and a 32kB data RAM - both megawizard created.)
>
> That support answer appears to explain what was going on.

I'd recommend dropping your MIF workaround and returning to the HEX-based flow. In theory, HEX and MIF are the same. In practice HEX-based updates are more likely to work correctly because they are part of the normal Nios design cycle. I personally avoid unusual things even if in theory they are supposed to work. I don't want to waste my time debugging Altera tools. YMMV.

Article: 112380
I am trying to simulate the PCI Express Endpoint Block 1.1 generated by coregen in ModelSim SE. I get the following errors:

Error: ../../src/pcie_top.v(2153): Module 'PCIE_EP' is not defined
Error: ../../test_bench/pcie_ne.v(1221): Module 'PCIE_INTERNAL_1_1' is not defined

I have installed the following as per Answer Record 9795: ISE 8.2, service pack (8_2_03i_lin.zip), IP update 2_3 (ise_82i_ip_update2_3.zip), Virtex-5 LXT Device (8_2_03i_v5_lx330_lxt.zip) and IP Update 2 LXT supplement 1_1 (ise_82i_ip_update2_lxt_sup_1.zip).

Also I have installed all the smartmodels from the following two locations: $XILINX/smartmodel/lin/image and $XILINX/virtex5/smartmodel/lin/image

Following are the models in my library:

========================================
For platform: x86_linux
========================================
Report of all versions of all models in the Library:
========================================
model: dcc_fpgacore_swift       versions: 02402
model: emac_swift               versions: 01022
model: glogic_adv_swift         versions: 01004
model: glogic_swift             versions: 04001
model: gt10_swift               versions: 02221
model: gt11_swift               versions: 01016
model: gt_swift                 versions: 01602
model: gtp_dual_swift           versions: 00008
model: pcie_internal_1_1_swift  versions: 00017
model: ppc405_adv_swift         versions: 01010
model: ppc405_swift             versions: 04003
model: temac_swift              versions: 00002
12 total model/versions found

Am I missing anything I need?

-Sovan

Article: 112381
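For what it's worth, the usual way to make those SmartModels (PCIE_EP, PCIE_INTERNAL_1_1 and friends) visible to ModelSim SE is to compile the Xilinx simulation libraries with compxlib and let it write the library mappings into a modelsim.ini. A sketch for a Linux install is below - the output directory is a placeholder and the exact switches vary a little between ISE versions, so compare against compxlib -help:

    # compile unisim/simprim plus the SmartModel wrappers for ModelSim SE, Verilog flavour
    compxlib -s mti_se -arch virtex5 -l verilog -lib unisim -lib simprim -lib smartmodel -dir /home/user/xlnx_libs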
> I know about all you said above. but you missed the point.
>
> in Xilinx flow you can run a tool like:
>
> data2mem system.bit software.elf download.bit
>
> it will take the FPGA design (with soft cpu) and merge the elf file __into__ it.
>
> in Lattice flow you *can* do the same.
>
> in Altera flow this is AFAIK __not_possible__ at all.
>
> What I need is a very simple thing:
>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>
> my_design.sof + my_software.elf => my_ready_to_program.sof
>
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>
> I see no other way as doing full RE on the SOF to accomplish that. its stupid, and I would REALLY like to use Altera tools to do that, but if Altera is not able to provide such an important tool/option then someone has to do it (I really would prefer to spend my time on other things than doing RE on Altera file formats)
>
> Antti

Hi Antti,

(Sorry if someone else has already posted what I'm about to say.) Sorry, I did not completely understand you the first time, but I did allude to what you need to do (or, well, part of it) in my post:

>> - Although the IDE-based flow doesn't do this now, you can even update your .sof file very quickly with onchip ram contents without risk of triggering an entire re-compile. I cannot recall the exact syntax of the command but I believe the compilation step is the Quartus Assembler (quartus_asm)

I've gone and found the missing step. These **should** work, but I don't have immediate access to verify in hardware right this moment. Please let me know if you run into trouble:

1. Build your starting .elf and .sof as you are now.
2. Run elf2hex (see my note in my original post about using the IDE to learn the command line args of this tool) on the .elf. This will produce a hex file with the memory contents for your software (code, data, basically anything linked into the .elf) at the address span of the onchip ram you want to target.
3. Run the following:
   quartus_cdb --update_mif <your quartus project name>.qpf
   quartus_asm <your quartus project name>.qpf

That's all there is to it. So, there are three commands to type, and not one (sorry), but this should accomplish what you want; there is no full re-compile (synthesis, fitting) of your software, just an "in place" update of memory contents.

Now for the caveat: the elf2hex command is pretty monolithic; if you have a multi-CPU system where each CPU linked code/data to a common memory you'd have problems, since each .hex file is written out "for the whole ram". Hand editing solves this case, as does splitting up such a complex design into smaller individual onchip rams (which to me is the most logical approach).

While I cannot comment on specific future product plans, I will say that this has been noticed before and we'll certainly look at automating this.

Jesse Kempa
Altera Corp.
jkempa --at-- altera --dot-- com

Article: 112382
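Spelled out as a sketch, with a project/revision called my_proj and a single onchip RAM assumed to span 0x0-0x3fff; the elf2hex switches are quoted from memory (the IDE's verbose console output or elf2hex --help is the authoritative reference):

    # 1. regenerate the initialization hex from the freshly built software
    elf2hex --input=software.elf --base=0x0 --end=0x3fff --width=32 --output=onchip_ram.hex
    # 2. fold the new .hex into the existing post-fit database
    quartus_cdb my_proj --update_mif
    # 3. re-run only the assembler to produce an updated .sof
    quartus_asm my_proj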
"Al" <alessandro.basili@cern.ch> wrote in message news:ejv6de$rrt$1@cernne03.cern.ch... > John_H wrote: >> Al wrote: >> >> If you pass your trigger through the FPGA, your measurement can be off by >> +/- hundreds of picoseconds from one measurement to the next. Over many >> measurements your gaussian jitter will be increased by a substantial >> amount only because of the jitter. > > Could you tell me why and how the FPGA will affect this measurement? I > didn't understand it, as simple as that. This is your input signal: ___________.------------------ This is your input signal through an FPGA (scope with persistence): ________........--------------- Sometimes it's this: ________.--------------------- Sometimes it's this: _____________.---------------- You cannot use a signal that's all over the place to measure time intervals precisely. >> Using a complex trigger condition through an FPGA doesn't mean you have >> to measure the signal coming out of the FPGA. Generate the mask in the >> FPGA and use it to gate the original (always on) trigger signal the >> measurement is performed on. You chooses the edge(s) to analyze but >> through a very clean external gate unaffected by the jitter-laden mask >> signal coming from the FPGA. > > This is correct, but unfortunately the system has a very wide distributed > trigger, so this operation has to be done in many places and will result > in a components overload. If the master edge that you want to measure is one of a very large number, you're sunk. If the master edge is one very clean signal that you only want to look at occasionally based on the wide, distributed trigger, you can generate that mask in the FPGA and use a clean external switch - such as Symon's PIN diode switches - to bring the edge to your analog phase latch/measurement circuit. The jitter in the gate will affect your measurement little. The uncertainty on the signal passed by the gate is what you're concerned about. >> If you insist on analyzing the signal leaving the FPGA, please let me >> know the company you're working for so I can avoid considering your >> products for any future needs. > > Very kind, but no worries I will not bother you with some time-measurement > product, I will mostly bother you with some other questions, waiting for > some other useful answers It seems the lot here is convinced that you shouldn't use FPGAs to generate a signal to do precision time measurements. You should have expertise within your organization that can help underscore the issues you'e facing. If the number of responses on this newsgroup telling you not to go down the road you're travelling without reevaluating your path doesn't convince you that you should reevaluate your path, you need the face-to-face interaction that will help you understand that you cannot achieve your goals without rethinking your approach. FPGAs are *exceptional* logic devices. They are NOT designed as analog elements. If you need a clean edge, you need analog level functionality that FPGAs are not designed to deliver. > -- > Alessandro Basili > CERN, PH/UGC > Hardware DesignerArticle: 112383
Austin Lesea wrote:
> Al,
>
> I am concerned you may not have the skills to do this right. I suggest you have an engineering review of the proposed architecture with those who know this field.

That's why I posted the topic in the first place.

> I use the ITU definition, jitter is the deviation from a perfect reference in the zero crossing(s) of a signal.
>
> Into, and out of, any circuit adds white noise. If the signal has a finite rise time (which they all do), then this noise creates timing uncertainty (jitter).

Agreed.

>> Given a fixed source that we know is stable in time (no matter how)
>
> Good trick. How do you define stable? A native rubidium cell? A double-oven crystal resonator? A SAW oscillator? What frequency? What is the reference's time interval error (TIE)? A stable reference is an oxymoron (a contradiction in terms), as with any reference, there are imperfections.

I misused the term "stable", but any time measurement needs a reference in time, and what I meant was to fix this source signal to time zero.

>> a signal produced from this source with some combinational logic and delay (like cables and I/O delay), the output distribution will be gaussian if we have a white-noise environment. The jitter I'm talking about will be (basically) the sigma of this distribution.
>
> Averaging over time will allow you to get a better measurement, but only if you can obtain many such samples, and there are no deterministic causes of jitter (of which there are many, such as power supply noise, or other signals switching, or pattern jitter from inter-symbol interference, and the list never seems to end).

Any measurement needs statistics, and for uncorrelated events the statistical error you will have over a distribution of points falls as 1/sqrt(number of points), no matter what tool you use to measure it. Starting from the assumption that my signal source will not have such deterministic noises (that is just to reduce the parameters, but the source will have noise), all my electronics to measure it will add some gaussian and non-gaussian noise to the ultimate value, and this is quite straightforward. But I don't see how inter-symbol interference will affect my one-shot system (as someone called it), nor pattern jitter. Power supply is an issue, surely, but we can get rid of it :-)

> If someone has something that already works, and is proven, by all means use it and stop trying to re-invent the wheel.

Even if sometimes this is the approach of some colleagues of mine, I surely won't reinvent the wheel, and that's why I'm using an HPTDC with 24.4 ps bin resolution; but still someone has to feed this guy with some signals, and this is the reason for my post.

> As I said, if you already have something that works, please don't use an FPGA as your level of understanding appears to be a recipe for disaster.
>
> Building such circuits is an art form.

I agree, and I'm not such an artist; that's why I ask questions and follow reasonable suggestions.

> Since every signal has a rise time, and every circuit switches at some threshold, and every circuit has noise, and every signal also has noise, the instant at which the circuit switches varies from rising edge to rising edge. Thus any individual edge has uncertainty.
>
> Variations in amplitude (the noise) create variations in time (phase).

I would say that is one component of time variation; moreover there are also bulk voltage variations, which may cause different time variations, and even the local temperature distribution, which may affect the time measurement.

> Yes. And then, magic occurs (?). Signal processing may remove many noise-like signatures, but you obviously recognize that not all noise is random, and it is those components that add the ultimate uncertainty.
>
> Again, I thought you had to time-stamp events, and as such, you did not have millions to average over. If you have millions of events, then you may use any technology you like, as long as you are able to remove/prevent the deterministic components, and there are enough samples to obtain the required certainty (resolution).

A time measurement, like any measurement to me, needs a lot of events to get rid of many effects. The number of measurements you need to make is directly related to the binning you want to apply to any value.

> I prefer to start with as good a signal as I can, and as good a reference as I require, rather than try to fix something when I am done.

There are some issues which it is important to have fixed beforehand; some others can easily be fixed later, such as time-walk for instance.

> (?) If not everyone agrees that this works, then you have a much larger problem. It also confirms my suspicion that you are lacking some fundamental understanding of signals and noise.

I can confirm that I lack many fundamentals. Thanks for your warnings.

Al

--
Alessandro Basili
CERN, PH/UGC
Hardware Designer

Article: 112384
Al schrieb:
> Of course all this logic is combinational logic and has nothing to do with the clock, but is it true that this logic cell delay will be changed by the presence of the clock inside the chip? Even if this clock is not connected to the combinational cell?

Yes. That is where the INL in the HPTDC comes from. The delay elements are sensitive to the power supply voltage, and the power supply voltage inside the chip changes when a lot of flip-flops are switching.

I found that very interesting, because this effect is usually next to impossible to measure in an IC. With the HPTDC you get a very detailed waveform view of the power supply voltage under stress ;-)

Kolja Sulimma
www.cronologic.de

Article: 112385
I had a similar problem with the S3 kit a while ago. The problem somehow disappeared when I did the following: decrease the programming frequency, then run a script that blanks the flash and then programs it _twice_ before verification.

bruns

Jürgen Böhm wrote:
> Hi,
>
> I have a problem in uploading an .mcs file from the Spartan 3 Starter Kit reference designs to the flash prom.
>
> Specifically it is the design "Micro Blaze Master System" from
>
> www.xilinx.com/products/boards/DO-SPAR3-DK/reference_designs.htm
>
> I use Impact from the ISE WebPack vers. 8.1 and follow the standard prescription for uploading .mcs files, which - for example - always works with the factory default .mcs or the .mcs I produce from my own designs. Concretely, I assign the chosen .mcs file to the flash prom with Impact, select the prom type xcf02s and then execute "program" with "verify" selected.
>
> But in the case mentioned above ("Master System") this gives me an error in the verification phase after the upload. The error is accompanied by a message "non contiguous address" (or the like).
>
> What could be the reason for this? Hardware fault? Incompatibility with newer Impact? Wrong procedure on my side?
>
> To check for hardware problems I tried "blank" with "verify empty" on the prom with Impact; this went through without error.
>
> I would be very happy if someone could help me resolve this problem, so that I can test all features of the board with this example design.
>
> Greetings
>
> Jürgen
>
> --
> Jürgen Böhm    www.aviduratas.de
> "At a time when so many scholars in the world are calculating, is it not desirable that some, who can, dream?"  R. Thom

Article: 112386
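For reference, bruns' blank-then-program-twice workaround can be scripted in iMPACT batch mode (run as: impact -batch prog.cmd); the command names and switches below are written from memory for the ISE 8.x era and the PROM is assumed to sit at position 2 in the JTAG chain, so treat this purely as a starting point and check it against the iMPACT help:

    setMode -bs
    setCable -port auto
    Identify
    assignFile -p 2 -file system.mcs
    Erase -p 2
    Program -p 2
    Program -p 2 -v
    quit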
Al schrieb:
> Kolja Sulimma wrote:
> [...]
>> Could also be from MSC in Darmstadt, but as he has a CERN email address I am sure he is using the HPTDC developed at CERN. The HPTDC homepage has vanished, but we use it in one of our TDC boards:
>> http://cronologic.de/products/time_measurement/hptdc/
>
> Exactly!
>
>>>> Can anyone say something about this? Does it sound reasonable?
>>
>> Slow input slopes create crosstalk in the HPTDC. Therefore it makes sense to have extremely fast LVDS input buffers in front of the chip anyway. If you use buffers with enable (or an AND-gate) you can control that from the FPGA to mask the signals. No need to route the signals through the FPGA.
>
> The big problem is that, as in all time measurements in physics, there will be a "trigger" configuration which will allow the time conversion. This will need to be implemented in an FPGA, because different "trigger" configurations will be needed. Because of that, all the signals will come from an FPGA or from combinational logic anyway.

Why is that? You can fan out the signals and perform the trigger decision based on a copy of the signals. If your data rate does not saturate the HPTDC you can do what we do: we digitize everything and then let the FPGA make the trigger decision on the measurement data.

> After that we can use all the drivers we want to minimize later sigma increase on the measurement, but still the source will jitter. My initial question was about the jitter increase due to the presence of a clock signal running through the FPGA, not that this clock source has anything logically related to the output signals to be measured. We are getting signals from PMTs (photomultiplier tubes), so they are single-ended signals and there is no great gain in converting them to LVDS and then converting them again to TTL inside the HPTDC. All these intermediate stages will dramatically add their sigma, worsening the overall measurement.

It depends on the distance the signals run externally. The noise induced in the LVDS signals will be much lower than that on the TTL signals. At a certain point this outweighs the converter jitter. Actually the converter jitter is low if the power supply is good. As the HPTDC inputs are not very good, in our case the results got MUCH better with the converters added in front.

> I saw your PCI board with the HPTDC installed; which type of LVDS drivers did you insert between NIM and HPTDC?

MAX9378EUA

> We are using a configuration such as to use the TTL port of the HPTDC and a fast comparator with a configurable threshold and an amplitude-time correction algorithm to correct the time-walk errors on different amplitude signals.

No constant fraction discriminator? What do you get the amplitude from? Pulse length? I guess now we are getting OT, maybe we should continue in a private conversation.

Kolja Sulimma
www.cronologic.de

Article: 112387
Utku Özcan wrote:
> ...
> Tell me from which university and department you are, and also your name, then I can answer your question. Of course for free ;-)
> ...

The message contained:

Path: uni-berlin.de!fu-berlin.de!news.maxwell.syr.edu!postnews.google.com!f16g2000cwb.googlegroups.com!not-for-mail

So you already know the university :)

Andreas

Article: 112388
kempaj@yahoo.com schrieb:
>> I know about all you said above. but you missed the point.
>>
>> in Xilinx flow you can run a tool like:
>>
>> data2mem system.bit software.elf download.bit
>>
>> it will take the FPGA design (with soft cpu) and merge the elf file __into__ it.
>>
>> in Lattice flow you *can* do the same.
>>
>> in Altera flow this is AFAIK __not_possible__ at all.
>>
>> What I need is a very simple thing:
>>
>> my_design.sof + my_software.elf => my_ready_to_program.sof
>>
>> I see no other way as doing full RE on the SOF to accomplish that. its stupid, and I would REALLY like to use Altera tools to do that, but if Altera is not able to provide such an important tool/option then someone has to do it (I really would prefer to spend my time on other things than doing RE on Altera file formats)
>>
>> Antti
>
> Hi Antti,
>
> (Sorry if someone else has already posted what I'm about to say). Sorry, I did not completely understand you the first time, but I did allude to what you need to do (or, well, part of it) in my post:
>
>>> - Although the IDE-based flow doesn't do this now, you can even update your .sof file very quickly with onchip ram contents without risk of triggering an entire re-compile. I cannot recall the exact syntax of the command but I believe the compilation step is the Quartus Assembler (quartus_asm)
>
> I've gone and found the missing step. These **should** work, but I don't have immediate access to verify in hardware right this moment. Please let me know if you run into trouble:
> 1. Build your starting .elf, .sof as you are now
> 2. Run elf2hex (see my note in my original post about using the IDE to learn the command line args of this tool) on the .elf. This will produce a hex file with memory contents for your software (code, data, basically anything linked into the .elf) at the address span of the onchip ram you want to target.
> 3. Run the following:
>    quartus_cdb --update_mif <your quartus project name>.qpf
>    quartus_asm <your quartus project name>.qpf
>
> That's all there is to it. So, there are three commands to type, and not one (sorry), but this should accomplish what you want; there is no full re-compile (synthesis, fitting) of your software, just an "in place" update of memory contents.
>
> Now for the caveat: The elf2hex command is pretty monolithic; if you have a multi-CPU system where each CPU linked code/data to a common memory you'd have problems since each .hex file is written out "for the whole ram". Hand editing solves this case, as does splitting up such a complex design into smaller individual onchip rams (which to me is the most logical approach).
>
> While I cannot comment on specific future product plans, I will say that this has been noticed before and we'll certainly look at automating this.
>
> Jesse Kempa
> Altera Corp.
> jkempa --at-- altera --dot--com

Hi Jesse,

what you describe sounds like a process one step "too early".

In the Xilinx flow it is possible to use bitgen (== quartus_asm) for memory init. bitgen takes an NCD file as input, not a bitstream. But there is another tool, data2mem, that takes a bitstream as input and produces another bitstream as output. So this flow *IS* missing in the Altera chain.

I want to run Quartus to produce a SOF, then use this SOF __outside__ the project directory, without any Quartus intermediate files, and merge it with the software object to generate another SOF that now includes the software. This sounds like it is not possible with the Altera tools.

The open-source development is in progress, so maybe a 3rd-party tool will be released before Altera adds this option to the Quartus flow.

[snipped here removed]

Antti

Article: 112389
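For comparison, the Xilinx flow being referred to: data2mem patches block-RAM contents directly in a finished bitstream, guided by a .bmm file that maps the BRAMs into the processor address space. A minimal sketch - the file names are placeholders, and in an EDK project the back-annotated _bd.bmm is normally written out for you during bitstream generation:

    # -bm: BRAM memory map, -bd: ELF carrying the software, -bt: input bitstream, -o b: write a new .bit
    data2mem -bm system_bd.bmm -bd software.elf -bt system.bit -o b download.bit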
John_H wrote:
> It seems the lot here is convinced that you shouldn't use FPGAs to generate a signal to do precision time measurements. You should have expertise within your organization that can help underscore the issues you're facing. If the number of responses on this newsgroup telling you not to go down the road you're travelling without reevaluating your path doesn't convince you that you should reevaluate your path, you need the face-to-face interaction that will help you understand that you cannot achieve your goals without rethinking your approach.

I do see the warnings and I am worried, precisely because this part of the design has already been done by somebody else and I didn't want to take it over again. Typically my approach is that if someone did something, he had his motivations for doing it that way; the same goes for all these replies. That's why I'm trying to evaluate why, and most importantly how much, this will affect my project. That's why I'm reading all these posts carefully and trying to weigh other people's experience; I cannot simply trust (at least that's not in my nature).

Thanks for your explanation; by the way, "why and how the FPGA will affect this measurement" is still an open issue to me.

Thanks

Al

--
Alessandro Basili
CERN, PH/UGC
Hardware Designer

Article: 112390
Try the following from the command line:

quartus_cdb <rev> --update_mif
quartus_asm <rev>

where <rev> is the base name of your QSF file (i.e. the revision).

The --update_mif option takes the post-fit results and updates any MIF and/or HEX file(s). You then run the assembler to re-generate the SOF. So, at least from the command line, you do not need to do a full compile and you do not need smart recompile.

If this does not work with HEX files (as it should), please file a service request or send me an email, as we would need to debug and fix this. Thanks,

-David Karchmer
Altera

radarman wrote:
> kempaj@yahoo.com wrote:
>> On Nov 17, 6:50 am, "Antti" <Antti.Luk...@xilant.com> wrote:
>>> I just can't believe it, it's one of the most useful things for FPGA SoC designs, and it's just not there? I really don't have time or fun to reverse engineer the .SOF format only to be able to write the data2sof utility for Altera.
>>
>> Antti,
>>
>> Others have commented on the general-purpose case, but since you made a specific reference to processors it's worth discussing the soft-CPU flow for placing your code/data into onchip ram.
>>
>> No, this wasn't forgotten. In fact, support for doing this has been around about as long as Nios I/Nios II have been (6+ years now?). There are even several ways to accomplish the task:
>>
>> - If you are building your Nios II software in the IDE, it will take any code/data linked into an onchip memory peripheral and use the elf2hex command to create a hex initialization file. The onchip RAM RTL generated by SOPC Builder is written out to be initialized this way; if you compile your design w/o any software having been built, memory will be left un-initialized, while if you first compile your software and then (re)compile in quartus, the .hex file(s) are used to initialize the memories.
>>
>> If you turn on verbose command line output from the IDE (window -> preferences -> nios II), you'll see the precise commands fly by on the console for future reference and command-line use.
>>
>> - Although the IDE-based flow doesn't do this now, you can even update your .sof file very quickly with onchip ram contents without risk of triggering an entire re-compile. I cannot recall the exact syntax of the command but I believe the compilation step is the Quartus Assembler (quartus_asm)
>>
>> - The "small" example design that ships with Nios II uses the above IDE-based method to initialize onchip RAM and as I recall the design's readme and other literature discuss this.
>>
>> Note that there are exclusions to what I've said, specifically for the types of onchip ram (m-ram blocks in Stratix, Stratix II) that cannot be initialized until runtime. The wizard to create an onchip ram in SOPC Builder allows you to choose which type of ram block will be used, if you desire, to ensure that you can pre-initialize contents if that is what you need to do.
>>
>> Jesse Kempa
>> Altera Corp.
>> jkempa --at-- altera --dot-- com
>
> This is all well and good, but even with smart compilation turned on, things don't work quite as well as they should. I'm designing my own CPU core, so I'm not using NIOS, but I still need to be able to update BRAMs with new program code.
>
> I've already worked around (sort of) the fact that for some reason, Quartus won't properly update a BRAM from a .hex file. (.hex files work fine for compilation, but not a mif/hex update.) So, I bring my .hex file into Quartus, and save it as a .mif file. This is annoying, since I can no longer automate the whole build process. At some point, when I get irritated enough, I'll write a program that converts hex to mif correctly, but I haven't had the time for much extra programming lately.
>
> However, the MORE irritating problem I've run into is that if I repeat the mif/hex update process a second time, changing only the .mif file, Quartus launches into a full recompile anyway. This is a bit frustrating, since the "hardware" is stable at this point. I just need to test software. Perhaps I've missed something, but I've yet to find a way to avoid the full recompile on the second update.
>
> Maybe this works correctly in the NIOS flow, but I don't have that tool available. I did try the NIOS eval, though, and I didn't see any new tools in it that looked like they would solve my problem.

Article: 112391
MM schrieb:
> "Antti" <Antti.Lukats@xilant.com> wrote in message news:1163888792.295560.43610@k70g2000cwa.googlegroups.com...
>>
>> two blocks only work in the case of a 128KB space that is made of 2 times 64KB, but if I have say 32KB + 8KB then the second one is not working :(
>
> IIRC it only works when the blocks are of the same size.... So, if you want 40K, you might need to make it out of 5 8K blocks...
>
> /Mikhail

1. The same-size requirement is BS; the tool should also tolerate different-sized blocks.
2. And... same-size blocks don't work either: only the first one gets PLACED, the second stays unprocessed.

So it's just another Xilinx bug.

Antti

Article: 112392
dkarchmer@gmail.com schrieb:
> Try the following from the command-line:
>
> quartus_cdb <rev> --update_mif
> quartus_asm <rev>
>
> where <rev> is the base name of your QSF file (i.e. the revision).
>
> The --update_mif should take the post-fit results and update any MIF and/or HEX file(s). You then run the assembler to re-generate the SOF. So, at least from the command-line, you do not need to do a full compile and you do not need smart recompile.
>
> If this does not work with HEX files (as it should), please file a service request or send me an email as we would need to debug and fix this. Thanks,
>
> -David Karchmer
> Altera

Don't worry so much - Xilinx has been fixing issues with their data2mem for years and they still have not managed to get it right. Well, a data2mem-like tool is completely missing from the Altera flow, so Altera can't have it wrong (as it is missing).

Antti

Article: 112393
Ray Andraka wrote:
> PeteS wrote:
>> Ray Andraka wrote:
>>> John Larkin wrote:
>>>>
>>>> They should back off this religious devotion to fully synchronous logic and give us a couple of dozen programmable true delay elements, scattered about the chip. But they won't because it's not politically correct, and because they figure that we're so dumb that we'd get into trouble using them.
>>>>
>>>> John
>>>
>>> Virtex4 has them. They are the idelay elements, which give you 64 steps of delay with 75ps granularity.
>>
>> That's great, but for those of us using lower power [and on a lower budget ;) ] devices, I could wish for the tools to obey my requirements, but that would be a 'vendor specific solution' unless I could get Verilog / VHDL changed to have a keyword where the synthesis / PAR tools would understand that 'this is not to be optimised in any way', perhaps on a module basis rather than on a global basis (WYSIWYG comes to mind).
>>
>> I've even gone to the extent of specifically instantiating primitives, and PAR *still* optimises them out, even though there's plenty of room for the logic, and I really wanted to do it that way for various reasons. I do not want the tools to try and think for me; software that tries to be that smart only succeeds in being dumb. I have written a *lot* of software, and I learned that lesson the hard way.
>>
>> As John already noted, the fervency around synchronous design is almost religious. As I note in another post, it's preferable *most of the time*, but there are times it actually prevents me from doing things properly. One could hope the FPGA vendors and tool vendors might listen to experienced pure hardware designers on this.
>>
>> Cheers
>>
>> PeteS
>
> If you really feel the need for async design, it can be done with utmost care on an FPGA, but such a design has to take into account the routing delays. The tools do not support specifying the routing delays beyond that needed to satisfy synchronous operation, and for a very good reason (I'll address that in a moment). You can keep the tools from optimizing out pieces you need through the addition of keep attributes. You can also instantiate primitives, which the tools generally do not remove (but you need to be careful with simple inversions or buffers), or you can create routed macros with FPGA editor. The hooks are there for manually doing your async design, and I have used them on the rare occasion. It is tedious and error prone, but everything you need to do it is there.
>
> That said, the FPGAs and tools are targeted to the sync design audience, as that is the bread and butter of the industry. The analysis and design is far easier to get correct with synchronous design rules, and as a result the tools are far easier to design and use for those cases. Since the synchronous design domain represents the large majority of all digital design, and the tools for doing it are lower hanging fruit than what is required for async design, it is natural that the FPGA vendors only target that market, and have no interest in the async market. The cost of entry is too high: tools are much harder to design and verify, there are many more gotchas that need to be covered and tested for, there is a potential for far more technical support needs, and on top of all that a market that is less than a percent of the total market. Why would any sane vendor even pursue that when there are so many other easier-to-obtain ways to make money?

Thanks for answering, Ray :)

I understand why the vendors don't specifically target the async market, although I will say that the software developers who do the tools don't always appear to truly understand hardware (it covers so many sins, I am not surprised, nor is it a complaint, more an acknowledgment).

As to making money, that is, to a great extent in any market, dependent on customer goodwill. Making primitives that won't be optimised away, and not protecting me from myself when I so ask, will make me far more likely to use such a vendor again, because they have listened to my requirements and responded that such things are available. I take the responsibility if it goes wrong, *provided* I get proper timing output data from the tools. I'm not asking for autoplacement and guaranteed constraint mapping, just a report that tells me the various timing elements - then it's up to me.

I once implemented a binary search on file data (before the days of huge virtual memory and large memory in general) where I wrote the function and manipulated the file pointer directly (those fun functions that use SEEK_SET, SEEK_END, SEEK_CURRENT etc). During development I trashed a lot of file data (that's why we always use a test file, not the live ones ;) but the point is I am willing to do the work to make things work the way I need them to work.

Cheers

PeteS

Article: 112394
We've had some very bizarre problems creating OPB bus masters to burst data to/from the DDR on a VP4 Mini-Module while running VxWorks on the PPC. Xilinx support concluded there was some sort of timing problem with the bus itself, but was unable to provide a fix. We also heard of similar OPB bus master woes from a group at UC Berkeley. We ended up using PLB masters, and that seems to work.

Mike

Article: 112395
Andreas Ehliar wrote:
> It is also possible to synthesize parts of your design and then include that part as presynthesized logic. In this way, XST synthesizes parts of your large design separately. However I don't know how to do that in ISE, as I haven't tried it myself. The makefiles in the EDK seem to work in that way as well.
>
> /Andreas

Hi Andreas,

I did not have internet access over the weekend - so I am late in looking at your posts. I looked over the ISE documentation, but I did not figure out how to include pre-synthesized logic. If anyone else has some ideas, that would be helpful.

I have seen *.edn files included in the synthesis of some other designs, but I am not sure how they help, i.e. whether they are a post-synthesis or a pre-synthesis netlist. Also I am not sure how to generate *.edn files.

Thanks for the pointer anyway.

Regards,
O.O.

Article: 112396
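On the *.edn question: an .edn file is an EDIF netlist, i.e. already-synthesized logic, and ngdbuild will merge such netlists into a top-level build. A command-line sketch of the partitioned flow follows; it assumes a submodule called big_block that is declared as a black box in the top level, and the .xst option files and directory names are placeholders:

    # synthesize the submodule on its own; XST writes big_block.ngc (a netlist, like .edn)
    xst -ifn big_block.xst
    # synthesize the top level with big_block left as a black box
    xst -ifn top.xst
    # ngdbuild merges any .ngc/.edn netlists found in the -sd search directory
    ngdbuild -sd netlists -uc top.ucf top.ngc top.ngd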
Dear Thomas,

Thomas Stanka wrote:
> Hi,
>
> olson_ord@yahoo.it schrieb:
>>> The number of gates _will_ increase, if you add gates! I guess you even didn't care about signal buffering. A signal driving 10k gates needs roughly 1000 gates for signal buffering.
>>
>> I found this comment useful - because I could not get this information in the manuals or other tutorials. (I never imagined so many gates would be required for buffering.)
>
> This number depends on the maximum fanout allowed for buffers in your library. If your technology allows a max fanout of 10 per gate, you need more than 1k buffers to reach an overall fanout of 10k for a signal. If your technology allows a maximum fanout of 32, you need only 324 buffers. But you typically have massive timing problems when your actual fanout reaches the max fanout.
>
> In some FPGAs you need to use special routing resources (clk-nets) for a large fanout. Those nets are typically strongly limited in number.

One thing I would like to clarify is that even though I am using 10K gates in my design, a single signal does not drive all of them. In fact I wonder whether there are more than a few signals that actually drive more than 10 gates. So I don't think that there is a need for signal buffering.

> The task of synthesis is NP, meaning the effort rises exponentially with the number of involved variables. I would consider each gate input as a variable for synthesis (with redundant inputs hopefully merged). The second problem would be place & route to meet timing. The more gates drive a logic cone, the more inputs have to be considered to place and route this logic cone.
>
> bye Thomas

Does this mean that these commercial tools have problems synthesizing medium-size combinational circuits? Would it help the synthesis if I somehow inserted flip-flops at an intermediate location in the logic? (This is near impossible for me - but right now the synthesis using commercial tools does not work at all.)

Thanks a lot.
O.O.

Article: 112397
> If you are just trying to build a trainer of some sort, wouldn't it be easier to build a simulation that could show all the internal states? It has got to be easier to do a simulation than a construction project from transistors or even from CPLDs.

I guess it's because I'm in Alaska? The largest-state-in-the-union mentality causes me to want to build a computer from transistors? :) I just find it so interesting that it can be done. Why not make a kit! :) Wouldn't a second box under the Altair with a few hundred lights be neat to watch?

I really want to build a computer from relays, and have even priced out some nice ones on eBay, but I am concerned about how long it would last before failing...

Grant

Article: 112398
I found the cause of the problem. I didn't compile the libraries properly. It's working now.

-Sovan

Article: 112399
Grant Stockly wrote:
> I really want to build a computer from relays, and have even priced out some nice ones on eBay, but I am concerned about how long it would last before failing...

If it is clocked at <10 Hz, like the first computer in the world:

http://irb.cs.tu-berlin.de/~zuse/Konrad_Zuse/en/Rechner_Z3.html

it should last very long with modern relays.

--
Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de