Hello All,

I am using Synplicity Pro 7.5.1 to generate my EDIF file and then the XST tools to do the mapping, PAR, etc. For my design, I am using RAM16X1S, i.e. 16x1 RAMs, from the Xilinx low-level libraries. ISE 6.2 has this nice feature that reports resource usage, i.e. the number of MUXF5s used etc., in the MAP report (could be done elsewhere in earlier versions). However, when I instantiate RAM16X1S, an equivalent number of dynamic-length shift registers are always indicated as used resources. This doesn't seem to make sense, because there seems to be no need for shift registers in my design. Also, looking at FPGA Editor indicates no dynamic shift registers used. I am assuming that SRL16s are used in this case with variable address inputs. (Funny thing is, no distributed RAM is reported to be used as SRL16s.) Is it a bug, or am I missing something? Please get back.

tEd

Article: 72951
Hi,

We are seeing evidence that a DCM is intermittently selecting the wrong tap position after it completes its lock sequence following a DCM reset pulse. I'd like to know if anyone has experienced this effect, and if they were able to resolve this problem.

In a 2v6000, I am using a variable phase shift DCM which is driven by a 622 MHz clock (divide-by-2 mode). The DCM generates 311 MHz clocks on its clk0/clk180 output pins. This interface uses IOB DDR regs for a 622 MHz/16-bit LVDS transmission solution.

The DCM initial value is set to 0. After a DCM reset, the DCM is phase shifted to its predetermined optimal "error-free" setting. However, intermittently, the interface experiences a small amount of bit errors. To eliminate the errors, the DCM is further phase shifted in one direction until it actually achieves error-free operation. The subsequent error-free window of operation in this mode matches very closely to the original calibration error-free window. It is as if the DCM locking sequence is corrupted somehow, resulting in a mis-aligned tap position. We don't have conclusive evidence because we can't see inside the DCM to see its tap position. We have found that we can eliminate this error condition by applying another one or two reset pulses to the DCM.

I realize that voltage fluctuations and switching noise could be causing this effect. Nonetheless, I'd like to hear from real-world experiences. Thank you.

John

Article: 72952
Hi, your situation reminds me of my own experience with video game design from last year. You are doing your third year in computer engineering, and I suppose that you have an idea what behavioral, structural and RTL mean. First, the fact that you are alone doing a project of this complexity, and on top of that having a time constraint, will not make things easy. The reasonable thing to do is to plan your time, based most probably on a weekly schedule. When we were doing our Nintendo video game, we spent 1/3 of the time searching for the right and reliable information, 1/3 of the time debugging the system and the remaining 1/3 of the time doing documentation. At the end, it was so complex that we spent a couple of weeks not sleeping (or having very little sleep). There were 7 of us doing the project. OK, 1/3 of the time was spent looking around, mostly on the net, for the appropriate information, because as far as we knew nobody had done it before (and we didn't want to plagiarize anything!), and the documentation for it was erroneous and scarce. It was really challenging to find the right source of information that could be trusted. Once this was done, the task of writing down CLEAR AND CONCISE specifications started. It was decided to break (or partition) the system into modules (SOC based on Xilinx modular design). This would be appropriate for a team-based approach, but in your case, I guess that you should start writing the specs for the most important part of the computer system: the CPU. The CPU is tricky because in 14 weeks you're gonna need to turn your choice towards an already made and tested free core module. It is tricky because you are going to have to work with specs that have been done by somebody else. OK, here is a clearer explanation. The CPU that you could get on the net would be written so as to "behave operation-wise" like, say, the original Z80.
It gets very problematic when you have to couple slow memories with those cloned VHDL CPUs, because the timing characteristics (did you take a computer architecture or microprocessor design class??) are not guaranteed to be compatible. Most of the time there is no problem with the interface, but that's the weak point in the system at this point. It would be wise to know by heart the difference in timing between the two CPUs (the off-the-shelf VHDL core from the net, and the original Z80). This will save you a lot of time and headaches! Know what your system consists of, and its weak points, and try to find solutions for those. Also, be aware of the fact that if you consider using on-chip, free-to-use SRAM, your system will be synchronous... Our NES on a chip (INES) had exotic clock frequencies: 5.37 MHz, 3.58 MHz, 1.79 MHz, 60 Hz. Building those clock domains in FPGA technologies can be tedious and challenging. You need to find the right balance and solve the "equation" while using the limited resources provided by the Spartan-3 chip. Up to now, I have only talked about gathering information, dividing the work and knowing the limitations. The hardest part is still to come. Once all info about the system is known, you need to focus on coding the system. As said earlier, divide and conquer is one of the most suitable approaches for such a project. At this point you should have your working CPU and some memory unit: 1) CPU 2) Memory unit --- test the two systems with some basic clock unit. From experience, behavioral test benches are good if you have a really fast computer, but again, if you have a demo Spartan-3 FPGA board, why not dump the design right away onto the hardware. You might want at this time to double-check your pin assignment constraints and any clock constraints. CPU and memory are simple to test... some bubble-sort algorithm might provide you with enough proof that your system is indeed working. The next block that you should focus on is the video system.
Write your specifications very clearly. How many colors should it display? What about the resolution? Are you driving a computer/LCD screen or a TV? TV screens are somewhat easier to work with, because they fit the original design naturally. But if you wanna make your design compatible with modern technologies, then you'd better give this block some attention. Learn what a sprite is, what a character is, how the screen is organized in tiles, where you use the 5.37 MHz clock (NTSC system), and how to generate hsync and vsync. OK, at this point let me warn you. One of the challenges that we had was to couple the video interface of the Nintendo with something like an SVGA screen or LCD panel. The Pacman system was designed to be used with a 60 Hz TV screen. This was so in our case, and we did need to use something called a frame buffer... The last block that should be approached is the sound unit, if you intend to provide the system with sound. The sound system would resemble something like a programmable square wave. That's not bad to design from a description (again, clear specification). Last touches include glue logic for memory interfacing and clock generation. Note that the order of the blocks, starting from CPU->MEMORY->VIDEO->SOUND, is important. It's hard to test the video and sound without an appropriate CPU. Again, success will come only if you have a good schedule. Don't underestimate your debugging cycle! This is the hardest part of the whole project. Debugging kills, and I really advise you to do things methodically. Get yourself a good logic analyzer from the lab (we had a TLA713). Learn how to use it and its nice triggering sequences. And prepare yourself to spend nights in front of it. This message is getting long, so I'm gonna cut it short here.
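By the way, the hsync/vsync generation mentioned above is basically just two chained counters driven by the pixel clock. Here is a bare-bones VHDL sketch of the idea; the counter limits and pulse widths below are placeholder numbers, not real NTSC or VGA timing values, so look up the correct ones for your target display:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity sync_gen is
  port (clk   : in  std_logic;   -- pixel clock
        hsync : out std_logic;
        vsync : out std_logic);
end entity sync_gen;

architecture rtl of sync_gen is
  -- placeholder totals: pixels per line and lines per frame
  constant H_TOTAL : integer := 341;
  constant V_TOTAL : integer := 262;
  signal hcnt : integer range 0 to H_TOTAL-1 := 0;
  signal vcnt : integer range 0 to V_TOTAL-1 := 0;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- horizontal counter; the vertical counter advances once per line
      if hcnt = H_TOTAL-1 then
        hcnt <= 0;
        if vcnt = V_TOTAL-1 then vcnt <= 0; else vcnt <= vcnt + 1; end if;
      else
        hcnt <= hcnt + 1;
      end if;
      -- sync pulses asserted over placeholder windows at the start of
      -- each line/frame (active low here; polarity depends on the standard)
      if hcnt < 25 then hsync <= '0'; else hsync <= '1'; end if;
      if vcnt < 3  then vsync <= '0'; else vsync <= '1'; end if;
    end if;
  end process;
end architecture rtl;
```

The visible-area enable and the tile/sprite address generation then fall out of comparisons against hcnt and vcnt.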
There's still a lot to say about designing something of this magnitude, but again, the secret is in getting the right timetable, getting the right information + specs, and getting the right instruments. I'm pretty sure I've forgotten a couple of other things, but if I remember anything, well, I'm just gonna post it here! Good luck with "pacman" and make it successful.

ja

Article: 72953
Hi,

I am new to the world of FPGAs. I have started working on my thesis on implementing an MPEG-4 codec on FPGAs. We have a Xilinx Virtex-II board from Memec, with an XC2V1000. Also included is an optional P160 communications module. My first task is to get image frames stored on the hard disk into the on-board 32M DDR SDRAM. Please give me some clues on this issue. Thanks,

Mayur Joshi

Article: 72954
rickman wrote:
> I am testing a VHDL design using embedded CPU program memory which needs
> to be initialized. The data to be stored in the RAM comes from an
> external source and will be part of the configuration download in the
> final system. During simulation, I can't seem to figure out how to
> initialize it.

You might model the data block as a vhdl constant array of vectors.

-- Mike Treseler

Article: 72955
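A minimal sketch of what Mike is suggesting; the package name, width, depth and contents below are made-up placeholders, not taken from rickman's design:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

package cpu_ram_init is
  -- the program memory modeled as a constant array of vectors
  type ram_array is array (0 to 15) of std_logic_vector(7 downto 0);
  constant RAM_INIT : ram_array := (
    0      => x"3E",   -- example opcode bytes
    1      => x"01",
    others => x"00");
end package cpu_ram_init;
```

In the memory model you can then declare `signal ram : ram_array := RAM_INIT;` so that simulation starts with the desired contents, without needing any extra initialization logic in the design itself.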
If you read the PCI spec and apply a bit of thought, you'll realize the specs are on the conservative side. This is so that everyone and their dog can implement PCI cards that will work together in harmony and without issue. Yes, the system is set to use reflected-wave switching and the ideal wave will ring up on the first reflection; however, with a spec that says the line impedance can be anywhere between 60 and 100 ohms for PCI (CompactPCI is much tighter), you will quickly realize there are going to be some ringy-dingies for a couple of round trips. During development of a PCI application card, it is common practice to use a PCI extender card. These typically extend the length of the PCI route going from the motherboard to the application card by 3-4 inches. The PCI signals typically look like crap, but they are settled by the time the clock arrives. These come in different varieties, some with ground planes, some without. I've used both with success when a few rules were followed (see below). Using a PCI extender card is similar to what you are trying to accomplish. For the task at hand, i.e. connecting to a PCI card with ribbon cable, you would be best to limit your cable length to less than 4 inches. As a previous poster mentioned, you really really really really want these ribbons right above a ground plane (or grounded foil) that is tied to the PCI card and the FPGA card (at as many points as you can). If your BIOS allows it, slow down your PCI bus clock, which will allow for longer settling time. For best results, put the card in the PCI slot furthest away from the motherboard chipset. Also, take out any (or all!) unneeded PCI cards from the backplane.

Good Luck
SM

"newgroups" <rprovo@xs4all.nl> wrote in message news:413f5f68$0$37789$e4fe514c@news.xs4all.nl...
> Hi,
>
> If you study the PCI specs, you will see that there is a maximum length
> specified for the PCB traces (1.5" and 2.5" for the clock line).
> This is
> quite important because the signals and switching points rely on the
> reflected wave principle of the PCI bus!!!
>
> I suggest you obtain the PCI specs. Your IO ports on your FPGA must also be
> PCI compliant and you really should take care of the right timing
> constraints to and from the PCI IO pads, otherwise you might experience a
> lot of problems on the PCI bus.
>
> regards
>
> ron proveniers
>
> "Ted" <ted@ted.com> wrote in message
> news:chf9eq$frj$1@newsg2.svr.pol.co.uk...
> > Hi,
> >
> > I would like to make a simple PCI device. I already own an FPGA evaluation
> > board and I also own a blank PCI prototyping card. I was planning on
> > connecting the FPGA board to the PCI prototyping card via ribbon cables.
> > Would this cause problems with noise? Would it help if I kept the ribbon
> > cables quite short?
> >
> > Thanks for any info,

Article: 72956
In article <413db1b1$0$19870$afc38c87@news.optusnet.com.au>, <Patrick Harold> says...
> I'm new to VHDL and I want to learn with examples.
> I want to build a 16-, 24- or 32-bit counter for quadrature encoder signals (i.e.
> A, B signals).
> Can someone help me create the following functionality in VHDL?
> <snip>

First of all, split up your design into modules, each with a more specialized task. I would suggest:
- a module for converting the input signals A, B, IDX into up and down counting pulses for the next stage
- a 32/16-bit up/down counter
- the output stage for parallel output
- a parallel/serial converter and its output stage

This may be sufficient for learning purposes. In practice, if you really want a working design you have to give some thought to the following:
- what happens when you are reading out during a counting pulse from the encoder
- what happens if the encoder signals are noisy and changing too fast, or counting continuously up and down
- metastability

Last of all, I would add a clock signal and make it a synchronous design.

Best regards
--
Klaus Falser
Durst Phototechnik AG
kfalser@IHATESPAMdurst.it

Article: 72957
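The first module Klaus lists might look something like the sketch below. The entity and port names are invented for illustration, the IDX input is omitted, and the direction sense depends on your encoder's phase convention, so check it against your hardware:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity quad_decode is
  port (clk      : in  std_logic;
        a, b     : in  std_logic;
        count_up : out std_logic;   -- one-clock pulse per step in one direction
        count_dn : out std_logic);  -- one-clock pulse per step in the other
end entity quad_decode;

architecture rtl of quad_decode is
  -- two-flop synchronizers for the asynchronous encoder inputs
  signal a_q, b_q : std_logic_vector(1 downto 0) := "00";
begin
  process (clk)
    variable a_edge : std_logic;
  begin
    if rising_edge(clk) then
      a_q <= a_q(0) & a;
      b_q <= b_q(0) & b;
      -- a transition on A (either edge) produces one count pulse;
      -- the level of B at that transition gives the direction (2x decode)
      a_edge := a_q(1) xor a_q(0);
      count_up <= a_edge and (a_q(0) xor b_q(1));
      count_dn <= a_edge and not (a_q(0) xor b_q(1));
    end if;
  end process;
end architecture rtl;
```

Because everything is sampled by clk, this also addresses the metastability point: only the synchronized copies a_q/b_q are ever used in the decode logic. Noise filtering (Klaus's second point) would need an extra debounce/majority stage in front of the edge detector.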
Hi,

I am designing an AMBA AHB master interface. As per the spec, a delayed version of the HMASTER bus is used to control the write data mux. So my doubt is whether I should drive HWDATA one clock after HADDR, or whether both can be driven at the same time. It's pretty urgent to make the decision...

~~Kumar.

Article: 72958
Russell Fredrickson wrote:
> Okay -- I think you (and others who replied) have missed one of the main
> points of SystemC. One of the points of SystemC is to enable you to model
> and simulate things at a HIGHER level of abstraction than RTL. If you write
> code at the RT level -- it will probably always simulate on the same order
> of magnitude whether it's Verilog or SystemC -- in fact since the Verilog
> simulators are more mature -- Verilog may simulate faster than SystemC
<SNIP>

I don't count myself in the "and others who replied" ;-) You hit the main point: higher level of abstraction (i.e. omitting detailed information) improves performance. You don't even need SystemC for that. It's very well possible in VHDL, and probably Verilog too.

Jos

Article: 72959
Hi,

> initialize it. I thought I might use the test bench to read the data
> from a file, but I can't figure out how to access the memory since it
> does not have an external interface. If I add logic to initialize the
> memory from the test bench, this will be unused in the real chip and so

Normally simulators provide Tcl commands to do this. For instance, MTI has: mem load & mem save. NCSIM has memory -read, memory -dump. I believe VCSMX also has similar command(s). Which simulator do you use?

rickman <spamgoeshere4@yahoo.com> wrote in message news:<413F93AF.393AEDF0@yahoo.com>...
> Is there a way to directly access an internal signal or variable from a
> test bench? I seem to recall doing this before, but it was a long time
> ago and I may be getting a simulator command mixed up with VHDL.

Again, it is simulator dependent: MTI has SignalSpy, NC has NCMirror, etc. Some time ago I wrote a simple package to keep the TB code a little independent of the simulator by having "probe" commands and converting them to the target simulator via a package; please see:
http://www.noveldv.com/eda/probe.zip

HTH,
Aji
http://www.noveldv.com

Article: 72960
Patrick Harold wrote:
>>> I would like to thank you for your understanding.
>>> Unfortunately I'm not a student. (I'm too old to be a student... When I
>>> was a student, I was working with tubes (not even with transistors)).
>>> It is not so easy at my age to keep track of all these new
>>> technologies. I'm trying my best to follow the technology. I recently
>>> started to study VHDL. I'm almost on "page one" of my VHDL study
>>> and want to learn by implementing simple little projects.

So did you download the Opencores example?:
<http://www.opencores.org/projects.cgi/web/quadraturecount/overview>

Paul Burke

Article: 72961
On Thu, 09 Sep 2004 03:27:06 +0200, "Christian E. Boehme" <boehme@os.inf.tu-dresden.de> wrote:
> Christian E. Boehme wrote:
>
>> The problem arises with the PCI outputs configured as LVTTL TP outputs
>
> Forget about that ;-) The outputs are configured as PP buffers so
> your suggestion makes sense. It still looks a bit hack-ish, though ;)
> Chris

I would like to reply, but I don't know what you mean by "TP outputs" and "PP buffers". Please stick to standard terms.

Philip

Philip Freidin
Fliptronics

Article: 72962
Hello:

I don't think the use of SystemC for RT design is a waste of time. I think, of course, that its main advantage is the possibility of describing the system at a higher abstraction level and developing the verification environment at that level of abstraction, but for the moment there are no tools that synthesize from this high abstraction level. The result is that you have to refine the model and finally write the RT modules of your system. If you want to take advantage of the verification environment you have developed before, you have to write the RT modules in SystemC. You will say that you can write them in Verilog and then use a tool that allows simulation of mixed languages; of course, but these tools are really expensive, and the code you write is not compatible with the SystemC GPL implementation. We use SystemC in that way: we first develop a verification environment and a high-level model of the system. Then we refine the model to an RT description in SystemC, and we use the same verification environment, changing the transactors, to verify the RT model. When the RT model is correct, we translate it to Verilog with an automatic translation tool in order to synthesize it. We don't maintain two descriptions, since the Verilog is an automatic translation from SystemC, and believe me, it works really well. I don't know if you have used SystemC for RT design; we use it extensively, and I can say it can be used for RT design without any problem. I don't understand why one would use PLI or mixed-language simulators, since there is a working implementation of SystemC, suitable for RT design, that works fine. If you don't believe me, go to www.opencores.org and download the SystemCAES, SystemCDES or SystemCMD5 projects, and you will see SystemC RT designs with a high-level model, a verification environment, and their automatic translation to Verilog, working.
Regards,
Javier Castillo
jcastillo@opensocdesign.com
www.opensocdesign.com

"Russell Fredrickson" <russell_fredrickson@hp.com> wrote in news:chnpum$vq2$1@news.vcd.hp.com:

> Okay -- I think you (and others who replied) have missed one of the
> main points of SystemC. One of the points of SystemC is to enable you
> to model and simulate things at a HIGHER level of abstraction than
> RTL. If you write code at the RT level -- it will probably always
> simulate on the same order of magnitude whether it's Verilog or
> SystemC -- in fact since the Verilog simulators are more mature --
> Verilog may simulate faster than SystemC (though I haven't done the
> exact measurements myself and it is simulator dependent). The talk
> about SystemC being faster is making the assumption that you write
> SystemC at a higher level of abstraction than your RTL. Though, as a
> side note, there are several vendors out there who will translate
> Verilog to optimized C/C++ or SystemC and then get about a 10x or more
> improvement over Verilog (TenisonEDA and Carbon Systems come to mind).
>
> In my opinion -- writing RTL in SystemC is a waste of time since
> Verilog (or VHDL) is more suitable to that task (and maintaining RTL
> descriptions in TWO languages seems like even more of a waste of time
> and is asking for trouble). In any case you can always have Verilog
> and SystemC co-exist by interfacing SystemC to RTL through a PLI or
> using one of the unified SystemC/Verilog simulators.
>
> My point -- adopting a new language without also adopting a new
> methodology that makes use of the power of the language will only give
> you limited benefit (if any benefit at all). For example, when going
> from schematic capture to Verilog -- many people at first used Verilog
> just like a textual schematic capture tool.
> This got them using an
> HDL (which is a step in the right direction), but they really didn't
> get the full advantage of the HDL until they started writing RTL that
> could then be synthesized into gates (basically raising the level of
> abstraction at which they modeled their design).
>
> So for SystemC some of the power of the language comes in being able
> to do a top-down implementation where you start with a high-level
> architectural model and refine it down to the RTL level (or perhaps
> use a behavioral synthesis tool once you get down to an appropriate
> level of abstraction). Also the SystemC Verification extensions (SCV)
> are another way SystemC can be used to improve your verification effort
> (here again -- you will probably need to use a different verification
> methodology to make full use of SCV's capabilities). I'll stop there
> -- if you look hard enough you should be able to find other references
> talking about the new methodologies enabled by SystemC.
>
> I hope that helps,
> Russell
>
> <singh.shailendra@gmail.com> wrote in message
> news:ab4d6621.0409072214.6b8ae5a@posting.google.com...
>> Hi,
>> Can anybody elaborate on the speed of simulation in SystemC in
>> comparison with Verilog? In our case we have used SystemC for
>> modeling the RTL design, then verified the SystemC RTL models. As
>> a final step, the SystemC RTL was converted into Verilog RTL
>> (line-by-line translation). We are surprised to see that both the
>> SystemC models and the Verilog models run at almost the same speed.
>> Can you throw some light on what went wrong in the process? Is the
>> SystemC coding not proper, or maybe the testbench was not written
>> properly? Or, if we code SystemC and Verilog at the same level of
>> abstraction, should we expect the same speed?

Article: 72963
Jacques athow wrote:
> CPU->MEMORY->VIDEO->SOUND is important.
>
> Its hard to test the video and sound without an appropriate CPU.

I disagree here. You can get a basic tilemap and sprite system running without a CPU. In fact, the design will build *much* faster without a CPU. We had both the tilemap and sprites going before we hooked them up to a CPU. Honestly, I don't think you've got much hope of completing this project single-handedly in 15 weeks, especially if your scope of work is anything near what Jacques has outlined. I would like to note, however, that the NES is considerably more complex than the early arcade games. In fact, if you want to forego the tilemap/sprite system, then go for something like Space Invaders, which is a simple monochrome bitmap. At least as a prototype, you could grab a dev board with VGA output and the OpenCores Z80 and just get the video going. Even before you worry about getting a CPU in there, I'd suggest you run Space Invaders on the debug build of MAME, halt it mid-game and dump the screen RAM to a file, convert it to Intel hex, and load it into an internal (FPGA) memory block. Then work on the video until you can see the correct display (keep a screenshot from MAME at the time of your dump). Once your video works, you can bolt on the CPU with the appropriate memory map and run the SI ROMs -- and you should see the game running?!? If you start the other way around and see nothing, you have no idea whether the fault lies in the CPU, the memory map, or the video!!! If you wanted to go further, you could develop a simple tilemap component -- 8x8 and 2bpp should cover most of the early games. Go back to MAME and grab another video RAM dump of, say, Pacman, and start again, without a CPU. When you get that going, dump your CPU again and do a 16x16, 2bpp sprite system, etc. I've found this the best approach for this type of project.

Regards,
Mark

Article: 72964
Hi Mukesh,

Mukesh wrote:
> Before using xpower for my design, I decided to check a simple
> design of the Fibonacci series. I am facing the following issues:
>
> I am running xpower with a vcd generated with post-par simulation, and
> during parsing I encounter the following warnings:
>
> WARNING:Power:91 - Can't change frequency of net CLK_BUFGP/IBUFG to 741.84Mhz.
> WARNING:Power:91 - Can't change frequency of net CLK_BUFGP to 741.84Mhz.
> WARNING:Power:91 - Can't change frequency of net CLK_BUFGP/IBUFG to 741.84Mhz.
> WARNING:Power:91 - Can't change frequency of net CLK_BUFGP to 741.84Mhz.
> ...
>
> The frequency for signals in the data view shows some values in the range
> of 2-9% in all cases except CLK_BUFGP/IBUFGP and CLK_BUFGP. Any
> attempts to change this value result in Power:91 warnings as above.
>
> The confidence level shows Accurate. I am confused, as the report shows
> zero power for clock/logic nets and still the confidence level is accurate.
> The report summary is:
>
> Total estimated power consumption: 439
> Peak Power consumption: 1081711
> ---
> Vccint 1.50V: 65 98
> Vccaux 3.30V: 100 330
> Vcco33 3.30V: 3 11
> ---
> Clocks: 0 0
> Inputs: 0 0
> Logic: 0 0
> Outputs:
> Vcco33 2 8
> Signals: 0 0
> ---
> Quiescent Vccint 1.50V: 65 98
> Quiescent Vccaux 3.30V: 100 330
> Quiescent Vcco33 3.30V: 1 3
>
> What's going wrong here? Has anybody encountered similar problems?
> Feedback/help from Xilinx folks please.
>
> --
> Mukesh

This does indeed look similar to another problem which we've been working on. We have a fix for that problem (the one we've been working on), and the fix will be available in the next service pack - 6.3.01i - which should be available to you next week. However, you might be experiencing a different symptom. One option would be for you to zip up the NCD & VCD files and send them to us? Or are they huge? The other option is for you to try the service pack next week.
Note - in order for you to use 6.3.01i you'll need to have the underlying 6.3i. (From your other e-mail to the newsgroup, it appears you are using 6.2.03i.)

Brendan

Article: 72965
Hi Mukesh,

Mukesh Chugh wrote:
> Hi,
>
> I am facing the following issues when using XPower to calculate the
> dynamic power for my design:
>
> Tools being used:
> Xilinx ISE 6.2, Xpower 6.2.03i
> ModelSim XE II/Starter 5.7g
>
> I am generating the .vcd file during post-PAR simulation and then
> using this file for XPower along with the .ncd and .pcf files. I get a lot
> of warnings like:
>
> WARNING:Power:763 - Only 41% of the design signals toggle.
> WARNING:Power:216 - VCDFile(564214): $dumpoff command encountered, all simulation data after this will be ignored.
> INFO:Power:555 - Estimate is reasonable based on analysis of the design, user
> WARNING:Power:91 - Can't change frequency of net clk to 166.67Mhz.
> WARNING:Power:91 - Can't change frequency of net clk to 165.83Mhz.
> WARNING:Power:91 - Can't change frequency of net clk_BUFGP/IBUFG to 165.83Mhz.
> WARNING:Power:91 - Can't change frequency of net clk_BUFGP to 165.83Mhz.
> WARNING:Power:91 - Can't change frequency of net GLOBAL_LOGIC0 to 0.83Mhz.
> WARNING:Power:91 - Can't change frequency of net ce_IBUF to 0.83Mhz.
> WARNING:Power:91 - Can't change frequency of net clk to 165.83Mhz.
> WARNING:Power:91 - Can't change frequency of net clk_BUFGP/IBUFG to 165.83Mhz.
> parsing completed in: 0 secs
> WARNING:Power:91 - Can't change frequency of net ce_IBUF to 0.83Mhz.
> WARNING:Power:91 - Can't change frequency of net gateway_in4_0_IBUF to 0.83Mhz.
> WARNING:Power:91 - Can't change frequency of net gateway_in4_1_IBUF to 0.83Mhz.
> ......
> WARNING:Power:91 - Can't change frequency of net gateway_in4_8_IBUF to 0.83Mhz.
> WARNING:Power:91 - Can't change frequency of net gateway_in8_IBUF to 25.83Mhz.
> WARNING:Power:91 - Can't change frequency of net gateway_in10_IBUF to 26.67Mhz.
> WARNING:Power:91 - Can't change frequency of net gateway_in9_IBUF to 25.83Mhz.
> WARNING:Power:91 - Can't change frequency of net gateway_in5_0_IBUF to 1.67Mhz.
> WARNING:Power:91 - Can't change frequency of net gateway_in5_1_IBUF to 0.83Mhz.
> ......
> WARNING:Power:91 - Can't change frequency of net gateway_in3_3_IBUF to 26.67Mhz.
> WARNING:Power:763 - Only 42% of the design signals toggle.
> WARNING:Power:763 - Only 42% of the design signals toggle.
>
> The report summary is:
> Power summary: I(mA) P(mW)
> ----------------------------------------------------------------
> Total estimated power consumption: 553
> ---
> Vccint 1.50V: 146 220
> Vccaux 3.30V: 100 330
> Vcco33 3.30V: 1 3
> ---
> Clocks: 0 0
> Inputs: 3 5
> Logic: 61 91
> Outputs:
> Vcco33 0 0
> Signals: 17 26
> ---
> Quiescent Vccint 1.50V: 65 98
> Quiescent Vccaux 3.30V: 100 330
> Quiescent Vcco33 3.30V: 1 3
> Startup Vccint 1.5V: 200
> Startup Vccaux 3.3V: 100
> Startup Vcco33 3.3V: 50
> ---
>
> My questions:
> - How come I get zero clock power? In another, smaller test design of
> counters, I do get some power, although the logic power in that case is
> very low.
> - The activity rate for clock nets is zero. How?
> - I get the correct simulation, but the power results seem incorrect.
> - What's the meaning of such warnings?
>
> --
> Mukesh

This too looks similar to the other problem which we've been working on. Could you zip up the NCD, VCD & PCF files and send them to me? (Or again, wait for the service pack next week.) It would also be helpful to know if the messages you've provided occurred in the sequence indicated above.

Brendan

Article: 72966
I'm having a problem with Xilinx WebPack 6.2i. I downloaded and installed it automatically, and the installation looked successful. When trying to get help through the HELP button and then the "Online Documentation" button, Adobe Reader starts and shortly displays a window saying "There was an error opening this document. The path does not exist". What path is it? What file should be there? Where should the file be? Etc.

Please can you help?

A. Beaujean

Article: 72967
"5hinka" <anonim99@poczta.wp.pl> wrote in message news:<chnb7j$8o7$1@nemesis.news.tpi.pl>...
> have some small program:
>
> use IEEE.STD_LOGIC_1164.ALL;
> use IEEE.STD_LOGIC_ARITH.ALL;
> use IEEE.STD_LOGIC_UNSIGNED.ALL;

Signals below are signed. Consider using the STD_LOGIC_SIGNED or NUMERIC_STD packages.

> ENTITY dek_zak2 IS
> PORT (Zegar,Jesli11,Jesli13: IN std_logic;
> Wyjscie : OUT std_logic);
> END;
>
> architecture Behavioral of dek_zak2 is
> signal wartosc : integer range -3 to 12;
> signal nastepny : integer range -3 to 4;
> begin
>
> dek_zak2 : process (Zegar,Jesli11,Jesli13)
> begin
> if Zegar'event and Zegar = '1' then
> if wartosc = 11 then
> wartosc <= nastepny;
> nastepny <= 0;
> else wartosc <= wartosc + 1;
> end if;
> else
> if rising_edge(Jesli11) then

"if rising_edge(X)" is (AFAIK) indistinguishable from "if X'event and X='1'" -- both tell the synthesizer that X is the clock for the registered quantities described inside the if clause. If Jesli11 is a synchronous variable, common practice would be to hold it in a register:

if rising_edge(clk) then
  Jesli11previous <= Jesli11;
end if;

and then elsewhere:

if rising_edge(clk) then
  if Jesli11previous='0' and Jesli11='1' then
    -- your logic here
  elsif Jesli13previous='0' and Jesli13='1' then
    -- your logic here
  else
    -- your logic here
  end if;
end if;

> if wartosc < 11 then wartosc <= wartosc + 1;
> else nastepny <= nastepny + 1;
> end if;

Especially for beginners, I would recommend _always_ defining the else clause for an if clause. Specify what you want nastepny to do when the "if" is true and what you want wartosc to do when the "if" is false.

> end if;

Here you have two separate if clauses defining the same variables. This is legal but not recommended. What behavior do you want when both "if"s are true -- rising_edge(Jesli11), rising_edge(Jesli13) -- and what behavior when neither is true? Consider using "elsif" and "case" statements.
> if rising_edge(Jesli13) then
>   if wartosc > 0 then wartosc <= wartosc - 1;
>   else nastepny <= nastepny - 1;
>   end if;
> end if;
> end if;
>
> end process dek_zak2;
>
> end architecture;

<snip>

Hope this helps,
-rajeev-Article: 72968
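To make the single-clock structure concrete, here is a minimal sketch of the rewrite suggested above: one clocked process with registered copies of Jesli11/Jesli13 used for edge detection. The `_prev` signal names and the priority order of the branches are my assumptions, not from the original post (the original's overlapping if clauses leave that priority undefined).

```vhdl
-- Sketch only: single-clock version of the counter, assuming Zegar is
-- the one real clock and Jesli11/Jesli13 are synchronous inputs.
library ieee;
use ieee.std_logic_1164.all;

entity dek_zak2_sync is
  port (Zegar, Jesli11, Jesli13 : in std_logic);
end entity;

architecture rtl of dek_zak2_sync is
  signal wartosc  : integer range -3 to 12 := 0;
  signal nastepny : integer range -3 to 4  := 0;
  signal j11_prev, j13_prev : std_logic := '0';  -- assumed names
begin
  process (Zegar)
  begin
    if rising_edge(Zegar) then
      j11_prev <= Jesli11;                         -- remember last samples
      j13_prev <= Jesli13;
      if j11_prev = '0' and Jesli11 = '1' then     -- rising edge of Jesli11
        if wartosc < 11 then wartosc <= wartosc + 1;
        else nastepny <= nastepny + 1;
        end if;
      elsif j13_prev = '0' and Jesli13 = '1' then  -- rising edge of Jesli13
        if wartosc > 0 then wartosc <= wartosc - 1;
        else nastepny <= nastepny - 1;
        end if;
      elsif wartosc = 11 then
        wartosc  <= nastepny;
        nastepny <= 0;
      else
        wartosc <= wartosc + 1;
      end if;
    end if;
  end process;
end architecture;
```

Everything now happens on one clock edge, so the synthesizer infers a single register bank instead of treating Jesli11/Jesli13 as extra clocks.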
Hi, How is memory access time specified in ns (nanoseconds)? As far as I understand, a memory read will take a few clocks, and the access time should be calculated from the clock period. So it cannot be a fixed number, right? Or is it specified for the maximum clock frequency? Please clarify. -MuthuArticle: 72969
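To illustrate the arithmetic behind the question (the 70 ns and 10 ns figures below are made-up examples, not from any datasheet): the access time on a datasheet is a fixed analog delay of the device, and a synchronous controller converts it into wait cycles by rounding up against its own clock period.

```vhdl
-- Illustration only: turning a fixed datasheet access time (in ns) into
-- a whole number of wait cycles for a given clock period.
library ieee;
use ieee.math_real.all;

entity wait_states is
  generic (
    T_ACCESS_NS : real := 70.0;  -- assumed memory access time
    T_CLK_NS    : real := 10.0   -- assumed 100 MHz system clock
  );
end entity;

architecture calc of wait_states is
  -- ceil(70.0/10.0) = 7.0: the controller must wait 7 full cycles
  constant WAIT_CYCLES : natural :=
    natural(ceil(T_ACCESS_NS / T_CLK_NS));
begin
end architecture;
```

The ns figure stays fixed; only the cycle count changes when the clock frequency changes.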
Hi Mike, I was looking for the same thing and found a 30-day evaluation at: http://www.embeddedtechnology.com/content/Downloads/SoftwareDesc.asp?DocID={1e91e7bc-edee-11d2-94bd-00a0c9b3bdf2}&VNETCOOKIE=NO I had to register first. Good luck, Borut.Article: 72970
Ted,

The RAM16 is the same structure as the SRL16. It is the 16-bit LUT. It can be used three ways: as a LUT, as a RAM, or as a shift register. You used it.

I agree that the report is less than obvious.

Austin

Ted wrote:
> Hello All,
>
> I am using Synplicity Pro 7.5.1 to generate my edif file and then XST
> tools to do the mapping and PAR etc.
>
> For my design, I am using a RAM16X1S i.e. 16-1 RAMs from the Xilinx
> low-level libraries. ISE 6.2 has this nice feature that reports
> resource usage i.e. No. of MUXF5s used etc in the MAP report (Could be
> done elsewhere in earlier versions). However, when I instantiate
> RAM16X1S, an equivalent number of Dynamic length shift registers are
> always indicated as used resources. This doesn't seem to make sense
> because there seems no need for shift registers in my design. Also,
> looking at FPGA editor indicates no Dynamic shift registers used. I am
> assuming that SRL16s are used in this case with variable address
> inputs. (Funny thing is no distributed RAM reported to be used as
> SRL16s)
>
> Is it a bug or am I missing something? Please get back.
>
> tEd

Article: 72971
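For reference, a direct instantiation of the primitive in question might look like the sketch below. The surrounding entity and signal names are invented for illustration; the component itself comes from Xilinx's unisim library.

```vhdl
-- Sketch of a RAM16X1S instantiation (16x1 distributed RAM in one LUT).
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity ram16_demo is
  port (clk, we, din : in  std_logic;
        addr         : in  std_logic_vector(3 downto 0);
        dout         : out std_logic);
end entity;

architecture rtl of ram16_demo is
begin
  u_ram : RAM16X1S
    generic map (INIT => X"0000")      -- power-up contents of the LUT
    port map (
      O    => dout,
      A0   => addr(0), A1 => addr(1),
      A2   => addr(2), A3 => addr(3),
      D    => din,
      WCLK => clk,
      WE   => we);
end architecture;
```

Since the same LUT cell implements RAM16X1S and SRL16, the MAP report's resource category for it can be misleading, as discussed above.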
John,

There have been cases where the frequency, jitter, and duty cycle are just on the edge of where the DCM phase detector will operate reliably.

Check the input duty cycle. It will need to be as close to 50% as you can make it. The spec is 45 to 55%, but at the higher frequencies it may have to be even closer to 50% when you take clock jitter into account (if it is 45% and it has jitter, then it is sometimes less than 45%!).

Hope this helps. If you can vary the input clock duty cycle, you should be able to make it always work, never work, and be in between like it is now. That will show you where it needs to be.

Duty cycle management is a tough thing, as it is affected by signal integrity, and at 311 MHz signals get distorted very easily. Even observing the signal can be tough, as at the pins it doesn't look like it does on the die (just simulate it to see that).

In the past, I have seen cases where the 100 ohm LVDS receive termination is removed and the problem goes away (due to the location of the 100 ohm resistor, and the stubs causing reflections distorting the signal). The termination for a clock signal input isn't really required (it would be for a data signal, to prevent ISI).

Subsequent versions of the DCM (S3 and V4) have even better phase detectors which are more tolerant of the duty cycle. There is always room for improvement.

Austin

John Cappello wrote:
> Hi,
>
> We are seeing evidence that a DCM is intermittently selecting the
> wrong tap position after it completes its lock sequence after a DCM
> reset pulse. I'd like to know if anyone has experienced this effect,
> and if they were able to resolve this problem.
>
> In a 2v6000, I am using a variable phase shift DCM which is driven by
> a 622 MHz clock (divide-by-2 mode). The DCM generates 311 MHz clocks
> on its clk0/clk180 output pins. This interface uses IOB DDR regs for a
> 622 Mhz/16-bit LVDS transmission solution.
>
> The DCM initial value is set to 0. 
> After a DCM reset, the DCM is phase
> shifted to its predetermined optimal "error-free" setting. However,
> intermittently, the interface experiences a small amount of bit
> errors.
>
> To eliminate the errors, the DCM is further phase shifted in one
> direction until it actually achieves error-free operation. The
> subsequent error-free window of operation in this mode matches very
> closely to the original calibration error-free window.
>
> It is as if the DCM locking sequence is corrupted somehow, resulting
> in a mis-aligned tap position. We don't have conclusive evidence
> because we can't see inside the DCM to see its tap position. We have
> found that we can eliminate this error condition by applying another
> one or two reset pulses to the DCM.
>
> I realize that voltage fluctuations and switching noise could be
> causing this effect. Nonetheless, I'd like to hear from real world
> experiences.
>
> Thank you.
> John

Article: 72972
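For context, a variable-phase-shift DCM of the kind described in this thread would be instantiated roughly as below. The generic values follow the post's description (divide-by-2, initial phase 0), but the entity, signal names, and the choice of DLL_FREQUENCY_MODE are my assumptions, not the poster's actual code.

```vhdl
-- Sketch of a Virtex-II DCM in divide-by-2, variable phase shift mode.
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity dcm_demo is
  port (clk622_in, dcm_rst, psclk, psen, psincdec : in  std_logic;
        clk0, clk180, psdone, locked              : out std_logic);
end entity;

architecture rtl of dcm_demo is
  signal clk0_i, clk0_buf : std_logic;
begin
  u_dcm : DCM
    generic map (
      CLKIN_DIVIDE_BY_2  => TRUE,        -- 622 MHz in -> 311 MHz to the DLL
      CLKOUT_PHASE_SHIFT => "VARIABLE",  -- stepped via PSEN/PSINCDEC
      PHASE_SHIFT        => 0,           -- initial value 0, as in the post
      DLL_FREQUENCY_MODE => "HIGH")      -- assumed for this clock rate
    port map (
      CLKIN => clk622_in, CLKFB => clk0_buf, RST => dcm_rst,
      PSCLK => psclk, PSEN => psen, PSINCDEC => psincdec,
      PSDONE => psdone,
      CLK0 => clk0_i, CLK180 => clk180, LOCKED => locked);

  -- CLK0 must be fed back through a global buffer for deskew
  u_bufg : BUFG port map (I => clk0_i, O => clk0_buf);
  clk0 <= clk0_buf;
end architecture;
```

After LOCKED asserts following a reset, the phase is walked to the calibrated tap by pulsing PSEN with PSINCDEC set, waiting for PSDONE between steps.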
I am having trouble getting the JTAG chain to validate with my Wind River Vision probe (using Vision Click software). It is on the separate PPC JTAG I/O: TCK, TDI, TMS, TDO, not using TRST or HALT. This is separate from the FPGA's dedicated JTAG pins. I am using the V2P30 and only one proc; the other proc is not instantiated. The documentation indicates several ways to hook this up. Does anyone have any experience with this? Thanks MarcArticle: 72973
"rickman" <spamgoeshere4@yahoo.com> wrote in message news:413F93AF.393AEDF0@yahoo.com...
> I am testing a VHDL design using embedded CPU program memory which needs
> to be initialized. The data to be stored in the RAM comes from an
> external source and will be part of the configuration download in the
> final system. During simulation, I can't seem to figure out how to
> initialize it. I thought I might use the test bench to read the data
> from a file, but I can't figure out how to access the memory since it
> does not have an external interface. If I add logic to initialize the
> memory from the test bench, this will be unused in the real chip and so
> I will have a difference between my simulated chip and the real chip. I
> prefer not to do that. I have been using an initial value on the memory
> variable declaration, but the data changes as I work and this is very
> clumsy.
>
> I also saw an example using a shared variable with one process being the
> normal memory model and the other being an init routine. But again,
> this will use code that is only part of the simulation and should not be
> there for the end device. I guess the code will not be synthesized, but
> since it has to be in the target source and not the test bench, it will
> either need to be removed or it will likely produce errors in synthesis.
>
> Is there a way to directly access an internal signal or variable from a
> test bench? I seem to recall doing this before, but it was a long time
> ago and I may be getting a simulator command mixed up with VHDL.
>

How about enclosing the initialisation part of the RAM (I guess you use a function that reads a file) between the pragmas "synopsys synthesis_off" and "synopsys synthesis_on"? 
(If you're initialising through a function, you should avoid synthesising it too, in the same way.) As for accessing an internal signal, I know that you can go "up" the hierarchy by specifying its full "pathname", but I don't know if it works "down" the hierarchy; you can always try.Article: 72974
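A minimal sketch of the pragma approach suggested above. The entity name, array type, file name, and use of std.textio are my assumptions; the point is only that both the reader function and the file-based initialiser sit inside the protected region, so synthesis never sees them.

```vhdl
-- Sketch only: RAM initialised from a file in simulation; the reader
-- function and the file-based initialiser are hidden from synthesis.
use std.textio.all;

architecture sim_init of cpu_mem is   -- entity assumed to exist elsewhere
  type ram_t is array (0 to 255) of bit_vector(7 downto 0);

  -- synopsys synthesis_off
  impure function init_ram(fname : string) return ram_t is
    file f     : text open read_mode is fname;
    variable l : line;
    variable m : ram_t := (others => (others => '0'));
  begin
    for i in ram_t'range loop
      exit when endfile(f);
      readline(f, l);
      read(l, m(i));     -- one binary word per line, e.g. "00001111"
    end loop;
    return m;
  end function;

  signal ram : ram_t := init_ram("program.mem");  -- simulation only
  -- synopsys synthesis_on
  -- For synthesis, a plain "signal ram : ram_t;" declaration (contents
  -- loaded by the configuration download) would take its place.
begin
end architecture;
```

This keeps the file I/O in the target source, as the original poster wanted, while the synthesised netlist contains only the bare RAM.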
Mark McDougall <markm@vl.com.au> wrote in message news:<41401b1f$0$22802$5a62ac22@per-qv1-newsreader-01.iinet.net.au>...
> Jacques athow wrote:
>
> > CPU->MEMORY->VIDEO->SOUND is important.

Hi Mark,

> > Its hard to test the video and sound without an appropriate CPU.
>
> I disagree here. You can get a basic tilemap and sprite system running
> without a CPU. In fact, the design will build *much* faster without a

True, the design and testing cycle will be faster without designing a CPU and its interfacing, but later, if you ever consider running it alongside an actual processor, you might face problems with the interfacing.

> CPU. We had both tilemap and sprites going before we hooked it up to a CPU.

Actually, the design of our "PPU" started some time after we had done the sound, CPU and memory. But the thing is that we needed a CPU model to test the video unit. It makes sense to start with the CPU and have it running. It's not imperative to have it, and it would be more fun to start with the video, but watch out for complications with the CPU-video interface somewhere further down the line. That was our mistake and I'm just stating it here.

I don't know how complex the Pacman video system is, but the process of designing the 6502 CPU helped us understand, and hence design, a PPU based on a "datapath-controller" approach. This methodology is fairly general; variations applying to something more specific, such as Pacman, are needed. In our case we also had a sound unit that was hard to test without a CPU.

Anyway, I really enjoyed our project, and yes, it took us a hell of a lot of time to complete it! I believe that with good planning and tools, you can achieve what you want in the time allocated. Just hurry up and start the project!!!

good luck and have fun ja