Peter Alfke wrote: > If you implement a binary counter and compare its output against another > binary value, there is no way on this earth to avoid combinatorial > glitches, no matter what technology or circuit design you use. I thought so... :-/ > The > classical solution to this problem is to Gray-code the counter, so that > only one bit changes at a time. Ah... Of course... Only problem is that I guess this requires a higher macro-cell count and my current design is already at the limit (and there are many counters in it :-)). > Of course, you can also re-synchronize the comparator output and thus > suppress all combinatorial glitches... I guess this is the best/"cheapest" way to do it. Thank you for the info. Best regards Preben
Article: 62526
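A minimal VHDL sketch of the "re-synchronize the comparator output" fix discussed above (the entity and signal names are made up for illustration, not taken from anyone's design). The combinatorial compare still glitches, but the glitches settle before the next clock edge and only the registered, glitch-free match signal leaves the block, one clock late:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity glitch_free_compare is
  generic ( WIDTH : integer := 8 );
  port (
    clk    : in  std_logic;
    rst    : in  std_logic;
    target : in  std_logic_vector(WIDTH-1 downto 0);
    match  : out std_logic   -- registered, so glitch-free (one clock late)
  );
end entity;

architecture rtl of glitch_free_compare is
  signal count : unsigned(WIDTH-1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        count <= (others => '0');
        match <= '0';
      else
        count <= count + 1;
        -- the combinatorial compare may glitch while count changes,
        -- but only its settled value is captured in the flip-flop
        if count = unsigned(target) then
          match <= '1';
        else
          match <= '0';
        end if;
      end if;
    end if;
  end process;
end architecture;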
In article <LEjob.66337$e01.231913@attbi_s02>, Glen Herrmannsfeldt <gah@ugcs.caltech.edu> wrote: >> Uh, sorry peter. Give me a normal bitfile and $200k to write the >> software the first time (overpay me) and I'll be able to back-annotate >> it at least to the placed EDIF netlist, as long as the architecture is >> supported by Jbits. > >Consider that, (from the Xilinx web site) there are Virtex2 devices with >125136 logic cells, and over 8 megabytes of configuration information. A >hex dump of the configuration file, 80 characters per line, 60 lines per >page, would be over 3000 pages long, and not readable by anyone. > >The netlist might be 100 times as long, or 300,000 pages. That is a stack >of paper about 100feet (30m) tall. Now, say you print that out, and maybe >wear out a few printers while doing it. Now you want to change one node. >How long will it take to find that node? > >There are smaller devices, and maybe one could work their way through the >netlist. I think, though, for anything bigger than an XC4002 it would be >hard to get much useful information out of even a netlist. Well, you've already greatly abstracted things once you go through the Jbits model; you really DO have the functional, LUT-mapped netlist when you are done with the pushbutton. What one wants to do AFTER that is up for debate, but it is machine-readable at that point, so one could start to build higher level tools to go from there. Given enough money and reason, the tools would get written. My point is that one doesn't have to reverse-engineer the bitstream: Xilinx has done that already. -- Nicholas C. Weaver nweaver@cs.berkeley.edu
Article: 62527
Hi, I am new to programmable logic. I got myself a Xilinx CoolRunner II CPLD design kit and would like to make a project to get into the swing of things. I would like to make a data (digital pulse) recorder. Perhaps it could count incoming pulses for a fixed length of time (or a length set by the PC), then store the count and a time stamp in the CPLD or FPGA or external memory, to later be transferred to the PC, perhaps out a serial RS-232 port. So, simply: a programmable logic device, maybe some external memory, and a serial interface to the computer. If storing the data is hard to do, it could simply count the incoming pulses for one minute, then transfer the data out to the PC and clear the counter for the next cycle. It would require that the PC be connected while acquiring data. This should only require a fast input pin and an output in ASCII format, i.e. with start and stop bits (I can convert it to RS-232 levels with an external MAX232 chip). Does anyone have any examples (CPLD or FPGA), or web links to tutorial-type examples of such a project? Martin
Article: 62528
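A minimal VHDL sketch of the "count pulses for a fixed gate time, then latch the result" part of such a project (the names, the 16-bit count width and the 50 MHz clock value are arbitrary assumptions, and the serial shift-out to the PC is left out entirely):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity pulse_counter is
  generic (
    GATE_CYCLES : integer := 50_000_000  -- 1 s gate at an assumed 50 MHz clock
  );
  port (
    clk      : in  std_logic;
    pulse_in : in  std_logic;                      -- asynchronous pulse input
    result   : out std_logic_vector(15 downto 0);  -- latched count per gate
    done     : out std_logic                       -- one-clock strobe per gate
  );
end entity;

architecture rtl of pulse_counter is
  signal sync  : std_logic_vector(2 downto 0) := (others => '0');
  signal gate  : integer range 0 to GATE_CYCLES-1 := 0;
  signal count : unsigned(15 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      done <= '0';
      sync <= sync(1 downto 0) & pulse_in;   -- synchronize and keep history
      if sync(2 downto 1) = "01" then        -- rising edge on the pulse input
        count <= count + 1;
      end if;
      if gate = GATE_CYCLES-1 then           -- gate time elapsed
        gate   <= 0;
        result <= std_logic_vector(count);   -- hand the count to the serial side
        count  <= (others => '0');           -- (a pulse landing here is lost)
        done   <= '1';
      else
        gate <= gate + 1;
      end if;
    end if;
  end process;
end architecture;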
"Austin Lesea" <Austin.Lesea@xilinx.com> wrote in message news:3FA2AF12.D47B0806@xilinx.com... > Glen, > > http://www.panasonic.com/industrial/battery/oem/images/pdf/Panasonic_LiIon_C harging.pdf > > As I said, just look it up. Probably the one who wants to build the thing should look it up. Though I might want to know why my cell phone battery died. -- glenArticle: 62529
"Nicholas C. Weaver" <nweaver@ribbit.CS.Berkeley.EDU> wrote in message news:bnucoo$2ka9$1@agate.berkeley.edu... > In article <LEjob.66337$e01.231913@attbi_s02>, > Glen Herrmannsfeldt <gah@ugcs.caltech.edu> wrote: > >> Uh, sorry peter. Give me a normal bitfile and $200k to write the > >> software the first time (overpay me) and I'll be able to back-annotate > >> it at least to the placed EDIF netlist, as long as the architecture is > >> supported by Jbits. > > > >Consider that, (from the Xilinx web site) there are Virtex2 devices with > >125136 logic cells, and over 8 megabytes of configuration information. A > >hex dump of the configuration file, 80 characters per line, 60 lines per > >page, would be over 3000 pages long, and not readable by anyone. > > > >The netlist might be 100 times as long, or 300,000 pages. That is a stack > >of paper about 100feet (30m) tall. Now, say you print that out, and maybe > >wear out a few printers while doing it. Now you want to change one node. > >How long will it take to find that node? > > > >There are smaller devices, and maybe one could work their way through the > >netlist. I think, though, for anything bigger than an XC4002 it would be > >hard to get much useful information out of even a netlist. > > Well, you've already greatly abstracted things once you go through the > Jbits model, you really DO have the functional, LUT-mapped netlist > when you are done with the pushbutton. > > What one wants to do AFTER that is up for debate, but it is > machine-readable at that point, so one could start to build higher > level tools to go from there. Give enough money and reason, and the > tools would get written. So how much money to get from 8 megabytes of configuration to commented VHDL code? -- glenArticle: 62530
"Ed J" <jed@privacy.net> wrote in message news:fvyob.18006$YO5.8966216@news3.news.adelphia.net... > I writing VHDL for a Xilinx Virtex-II Pro FPGA application, and software for > its embedded PowerPC. The PowerPC doesn't have hardware floating point, and > floating point emulation in software is too slow for my purpose. So, I'm > looking for a way to add floating point in the FPGA fabric through VHDL or > black boxes. I need IEEE floating point operations such as add, subtract, > multiply, divide. I also need some math functions like sine, cosine, square > root, etc. > > Is there a (free) standard package in VHDL that I can use for this? Third > party support? > > I'm a newbie on a budget, so I'm looking for the cheapest, quickest > solution. I will guess that floating point in an FPGA is more expensive than in a microprocessor. How fast do you need, how fast are integer operations on your PPC, and how fast are software floating point? There is a fair amount of extra work needed for IEEE that isn't necessary for all users. -- glenArticle: 62531
"Bose" <blueforest2@yahoo.com> wrote in message news:a395f2ee.0310301931.6b02246@posting.google.com... > Well, I ran all the programs ( running fc2,design manager ... )and > implementation and all phases were successful.It's only at the last > stage where I have this problem,the chip's memory size is too small to > store my vhdl program. > > By the way, the XC95108 has 108 cells , what are cells ? so the > sc95108 can store 108 flip-flops ? > > Thanks for the help. The numbers in the suffix stand for macrocell count. So, a XC95108 woould have 108 MCs. A MC contains a register, PTA (product term allocator) and a switch matrix. You should read the datasheet for more info. Also a "chip's memory size" does not make any sense. The 9500s do not have any on-chip memory. Maybe you meant that the device is over-utilized?Article: 62532
The IEEE standard is nice, since it makes the results machine-independent. But it also carries a large overhead in the size of mantissa and exponent, and in all the special cases, like graceful underflow, that do not concern the "normal" user. Many users can live with the "big" step between 1 x 2^-128 and real zero... The new FPGAs have fast 18 x 18 twos-complement multipliers that can also be used as shifters, which makes the "bare" floating point implementation less formidable than it used to be. Peter Alfke ================= Glen Herrmannsfeldt wrote: > > "Ed J" <jed@privacy.net> wrote in message > news:fvyob.18006$YO5.8966216@news3.news.adelphia.net... > > I writing VHDL for a Xilinx Virtex-II Pro FPGA application, and software > for > > its embedded PowerPC. The PowerPC doesn't have hardware floating point, > and > > floating point emulation in software is too slow for my purpose. So, I'm > > looking for a way to add floating point in the FPGA fabric through VHDL or > > black boxes. I need IEEE floating point operations such as add, subtract, > > multiply, divide. I also need some math functions like sine, cosine, > square > > root, etc. > > > > Is there a (free) standard package in VHDL that I can use for this? Third > > party support? > > > > I'm a newbie on a budget, so I'm looking for the cheapest, quickest > > solution. > > I will guess that floating point in an FPGA is more expensive than in a > microprocessor. > > How fast do you need, how fast are integer operations on your PPC, and how > fast are software floating point? There is a fair amount of extra work > needed for IEEE that isn't necessary for all users. > > -- glen
Article: 62533
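To make the "bare" floating point idea concrete, here is a VHDL sketch of a single-cycle multiplier for a made-up 27-bit format (1 sign bit, an 8-bit exponent with an assumed bias of 127, and an 18-bit explicitly normalized mantissa) that maps straight onto one 18 x 18 multiplier. There is no rounding beyond truncation, no denormals, no zero handling and no exponent overflow check, so it is only an illustration of the principle, not an IEEE implementation:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- made-up format: bit 26 = sign, bits 25..18 = exponent (bias 127),
-- bits 17..0 = mantissa, explicitly normalized so that bit 17 = '1'
entity bare_fp_mult is
  port (
    clk  : in  std_logic;
    a, b : in  std_logic_vector(26 downto 0);
    p    : out std_logic_vector(26 downto 0)
  );
end entity;

architecture rtl of bare_fp_mult is
begin
  process (clk)
    variable ma, mb : unsigned(17 downto 0);
    variable mp     : unsigned(35 downto 0);
    variable ep     : unsigned(8 downto 0);
  begin
    if rising_edge(clk) then
      ma := unsigned(a(17 downto 0));
      mb := unsigned(b(17 downto 0));
      mp := ma * mb;                          -- maps onto one 18 x 18 multiplier
      ep := resize(unsigned(a(25 downto 18)), 9)
            + resize(unsigned(b(25 downto 18)), 9)
            - to_unsigned(127, 9);            -- remove one copy of the bias
      if mp(35) = '1' then                    -- product in [2.0, 4.0): shift right
        p <= (a(26) xor b(26)) & std_logic_vector(ep(7 downto 0) + 1)
             & std_logic_vector(mp(35 downto 18));
      else                                    -- product already in [1.0, 2.0)
        p <= (a(26) xor b(26)) & std_logic_vector(ep(7 downto 0))
             & std_logic_vector(mp(34 downto 17));
      end if;
    end if;
  end process;
end architecture;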
Austin Lesea <Austin.Lesea@xilinx.com> writes: >Well, after 919 equivalent device years of experiment at sea level, >Albuquerque (~5100 feet), and White Mountain Research Center (12,500 >feet) the Rosetta Experiment* on the 3 groups of 100 2V6000s has logged >a grand total of 45 single soft error events, for a grand total of 20.4 >years MTBF (or 5335 FITs -- FITs and MTBF are related by a simple >formula -- mean time between failures vs failures per billion hours or >FITs). But keep in mind that SEUs are random events, unlike other failure mechanisms that depend on cumulative damage, so if one device has an MTBF of 20 years then a system with 20 devices has an MTBF of one year. Most professionals in the radiation effects field don't use MTBF as a measure of SEU immunity, they use errors/bit-day or a similar metric. >So that means that a 2V6000 at sea level gets a logic disturbing >hit once every 200 years. And that if you have 200,000 in the field at sea level then 2 or 3 are getting a logic disturbing hit EVERY DAY. Or if you have critical mission that lasts for five years then your chance of getting a logic disturbing upset is one in forty. OK for a PC running Windows, perhaps, but if you are building warheads.... >You know, if you want to use FITs, we'll use FITs. But I am afraid it >will give those spreading nonsense fits (pun intended). Again, FITs is not a good metric. These aren't "failure in time", they are random events. An SEU can happen in the first millisecond of operation or after 200 years of operation. >Some of our customers have now qualified Virtex II Pro as the ONLY >solution to the soft error problem, as ASICs can't solve it (easily like >we have), Now that's just misinformation. We've put a number of ASICs in space, and in worse environments than the surface of Mars. How did Galileo survive the sulfur ions around Jupiter for ten years without your products? Can you tell us what the penalty in area and speed would be in going to TMR? And exactly which of your products have sufficient resistance to total ionizing dose to be considered for space applications...do your current state-of-the-art products fit in this category? >And, you may want to consider going with the vendor who has been >actively working on soft error mitigation for more than five years now. I've been in this business for twenty years, on both the military and civilian side. I've designed full custom, ASIC and FPGA products for a variety of space applications. Methinks the lady doth protest too much... Joe -- K. Joseph Hass Center for Advanced Microelectronics & Biomolecular Research 721 Lochsa St., Suite 8 Post Falls, ID 83854Article: 62534
If I could get 100 nanosecond performance for a floating point multiply or divide, I think that would be good enough. The math that my application needs to perform isn't quite nailed down yet, but it involves signal processing that will need complex multiplication, complex division, sine, cosine, square root, etc. The performance goal of the system is to make measurements at a 100 MHz rate, but it can be slower if cost becomes prohibitive. The PowerPC is rated for about 300 MHz, but the development board I am currently using is running at only 100 MHz, and it takes a single cycle (10 nanoseconds) for an integer operation. At 100 MHz, an emulated floating point addition operation takes about 48 microseconds, which is way too slow to meet my performance specs. Are there any high performance "floating point coprocessors" out there, like the ones that used to be paired with older CPUs? The production quantity for our system will be very low, and cannot justify a large expense for third-party IP, so we are willing to consider an extra chip or two to do special work. For example, rather than invest in an Ethernet IP core for our FPGA, we mated it to a commercially-available Ethernet MAC/PHY chip, and saved a lot on IP costs, NRE, and FPGA internal real-estate. "Glen Herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message news:bUzob.71254$HS4.627308@attbi_s01... > > "Ed J" <jed@privacy.net> wrote in message > news:fvyob.18006$YO5.8966216@news3.news.adelphia.net... > > I writing VHDL for a Xilinx Virtex-II Pro FPGA application, and software > for > > its embedded PowerPC. The PowerPC doesn't have hardware floating point, > and > > floating point emulation in software is too slow for my purpose. So, I'm > > looking for a way to add floating point in the FPGA fabric through VHDL or > > black boxes. I need IEEE floating point operations such as add, subtract, > > multiply, divide. I also need some math functions like sine, cosine, > square > > root, etc. > > > > Is there a (free) standard package in VHDL that I can use for this? Third > > party support? > > > > I'm a newbie on a budget, so I'm looking for the cheapest, quickest > > solution. > > I will guess that floating point in an FPGA is more expensive than in a > microprocessor. > > How fast do you need, how fast are integer operations on your PPC, and how > fast are software floating point? There is a fair amount of extra work > needed for IEEE that isn't necessary for all users. > > -- glen > >
Article: 62535
Followup to: <3fa24caf$0$12666$fa0fcedb@lovejoy.zen.co.uk> By author: "Nial Stewart" <nial@spamno.nialstewart.co.uk> In newsgroup: comp.arch.fpga > > Don't phone batterys degrade fairly quickly if they're > constantly trickle charged rather than being fully > discharged then re-charged? Are the characteristics > of your batterys different? > The much-hyped "memory effect" of certain battery technologies has actually been disproven in quite a few tests -- what it is really all about is overcharging. Some chargers would simply give full power for X amount of time, and if the battery wasn't fully discharged at the start of the cycle it would get overcharged. Overcharging causes massive heat dissipation inside the battery, and can cause the battery matrix to crack, thus causing the battery to degrade. -hpa -- <hpa@transmeta.com> at work, <hpa@zytor.com> in private! If you send me mail in HTML format I will assume it's spam. "Unix gives you enough rope to shoot yourself in the foot." Architectures needed: ia64 m68k mips64 ppc ppc64 s390 s390x sh v850 x86-64
Article: 62536
Let me just address the relatively simple subject of FITs vs MTBF. 100 FITs means an MTBF of 10 million hours, i.e. well over 1,000 years. But nobody I know would be silly enough to interpret this to mean that each circuit lives that long and then suddenly dies. We all assume a statistically even distribution (with different parameters describing infant mortality). That's why we laughed when Actel (in the original press quote) made such a big issue about the difference: "Actel, currently the only anti-fuse FPGA maker, refuted this suggestion, pointing out that Xilinx's use of mean time between failures (MTBF) is the wrong metric to measure error rates: "MTBF is the wrong statistic, because a neutron event is random," said Brian Cronquist, senior director of technology at Actel." I sent him an e-mail suggesting that we disagree on more relevant things. No answer. Seems like they don't have a more meaningful rebuttal. Enough said. Obviously the Xilinx large-scale "Rosetta" test results have given the antifuse community fits (pun intended). They should. That is not to say that we are perfect, or that we have the only viable solution. But antifuses have lost their (high-priced, small size) monopoly. And fresh blood and competition is always healthy, even in aerospace! Peter Alfke
Article: 62537
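For reference, the conversion both sides are arguing over is just the definition of FIT (failures per billion device-hours), nothing specific to the Rosetta data:

\[
\mathrm{MTBF} = \frac{10^{9}\ \text{device-hours}}{\mathrm{FIT}},
\qquad
5335\ \mathrm{FIT} \;\Rightarrow\; \frac{10^{9}}{5335}\ \mathrm{h} \approx 1.9\times10^{5}\ \mathrm{h} \approx 20\ \text{years per device},
\]
\[
\text{and for a fleet of } N \text{ identical devices: } \mathrm{MTBF}_{\text{fleet}} = \frac{\mathrm{MTBF}_{\text{device}}}{N}.
\]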
Joe, Thanks for giving me the opportunity to reply. I thought no one cared to comment. See below. Austin Joe Hass wrote: > Austin Lesea <Austin.Lesea@xilinx.com> writes: > >Well, after 919 equivalent device years of experiment at sea level, > >Albuquerque (~5100 feet), and White Mountain Research Center (12,500 > >feet) the Rosetta Experiment* on the 3 groups of 100 2V6000s has logged > >a grand total of 45 single soft error events, for a grand total of 20.4 > >years MTBF (or 5335 FITs -- FITs and MTBF are related by a simple > >formula -- mean time between failures vs failures per billion hours or > >FITs). > > But keep in mind that SEUs are random events, unlike other failure > mechanisms that depend on cumulative damage, so if one device has an > MTBF of 20 years then a system with 20 devices has an MTBF of one year. > Most professionals in the radiation effects field don't use MTBF as > a measure of SEU immunity, they use errors/bit-day or a similar metric. So, the device has 20 million bits. Do the math. I have stated all the arguments. You like cross section? bit errors/time? Just poke the buttons on your calculator. It is all statistics (even MTBF or FITs). Soft errors are no different from any other failure: they are random! > > > >So that means that a 2V6000 at sea level gets a logic disturbing > >hit once every 200 years. > > And that if you have 200,000 in the field at sea level then 2 or 3 are > getting a logic disturbing hit EVERY DAY. Or if you have critical mission > that lasts for five years then your chance of getting a logic disturbing > upset is one in forty. OK for a PC running Windows, perhaps, but if you > are building warheads.... Yes! And if I had 200 million of them, I would be getting an error every millisecond! Oh my! Help! Oh s**t! Give me a break. This is standard 5 o'clock news hype: just make it sound as bad as possible. Fact: each unit will still fail only once every 200 years. If you are fortunate enough to have sold a million units, then you should also be smart enough to use the necessary design techniques to mitigate being put out of business by the more dominant failure rate of the hardware in the system itself. Soft errors are a small part of the overall system reliability calculation you must perform. That is my point here. > > > >You know, if you want to use FITs, we'll use FITs. But I am afraid it > >will give those spreading nonsense fits (pun intended). > > Again, FITs is not a good metric. These aren't "failure in time", they > are random events. An SEU can happen in the first millisecond of operation > or after 200 years of operation. Oh yes, and it happened right now! Oh my! Stop it. Give it up. You can only scare people who are ignorant of real world effects. > > > >Some of our customers have now qualified Virtex II Pro as the ONLY > >solution to the soft error problem, as ASICs can't solve it (easily like > >we have), > > Now that's just misinformation. We've put a number of ASICs in space, > and in worse environments than the surface of Mars. How did Galileo > survive the sulfur ions around Jupiter for ten years without your products? I don't know. Did it use 90nm technology? Nope. > > > Can you tell us what the penalty in area and speed would be in going > to TMR? None. Uses up 3X+ logic though. > And exactly which of your products have sufficient resistance > to total ionizing dose to be considered for space applications...do your > current state-of-the-art products fit in this category? Yes. We have rad hard FPGAs for total ionizing doses.
Look it up in the Q-Pro line on the web. The devices are immune to SEL, too. ASICs and standard parts are having problems with SEL now. Didn't you know that? Haven't been reading your LANSCE test updates, huh? > > > >And, you may want to consider going with the vendor who has been > >actively working on soft error mitigation for more than five years now. > > I've been in this business for twenty years, on both the military and > civilian side. I've designed full custom, ASIC and FPGA products for > a variety of space applications. Good, then you should welcome all the work we are doing, and the progress we are making. And you should recognize the FUD that is being spread about by others who are not only ignorant of what is going on, but have no other intent than to save their own skins by spreading as much false information as possible. > > > Methinks the lady doth protest too much... All the world's a stage..... > > > Joe > -- > K. Joseph Hass > Center for Advanced Microelectronics & Biomolecular Research > 721 Lochsa St., Suite 8 Post Falls, ID 83854
Article: 62538
I read an article in "Scientific American" about how much information can be compressed into a certain volume, and apparently all objects have a Shannon entropy in addition to the thermodynamic entropy. Also, black holes have a Shannon entropy that is based on the surface area of the event horizon. I was totally lost. Can anybody else explain how Shannon's information theory applies to black holes? -KevinArticle: 62539
"Kevin Neilson" <kevin_neilson@removethiscomcast.net> wrote in message news:ZzCob.71991$Fm2.57178@attbi_s04... > Can anybody else explain how Shannon's information > theory applies to black holes? Yes. Hawking can.Article: 62540
Hi Austin, Out of interest, how many of the 300 parts in your experiment broke permanently? Any at all? If there were any 'hard' failures, did altitude affect this statistic, or were these failures due to other mechanisms? Syms. "Austin Lesea" <Austin.Lesea@xilinx.com> wrote in message news:3FA0113E.BDD340C1@xilinx.com... > Hello from the SEU Desk: > > Peter defended us rather well, but how can one seriously question real > data vs. babble and drivel? > > Well, after 919 equivalent device years of experiment at sea level, > Albuquerque (~5100 feet), and White Mountain Research Center (12,500 > feet) the Rosetta Experiment* on the 3 groups of 100 2V6000s has logged > a grand total of 45 single soft error events, for a grand total of 20.4 > years MTBF (or 5335 FITs -- FITs and MTBF are related by a simple > formula -- mean time between failures vs failures per billion hours or > FITs). > >Article: 62541
Kevin Neilson wrote: > Can anybody else explain how Shannon's information > theory applies to black holes? Sounds like just the place for a First In Never Out (FINO) transmit buffer. -- Mike TreselerArticle: 62542
Kevin, Really quite easy. Just read http://www.mdpi.org/entropy/papers/e3010012.pdf Now, after you have read it, go get a stiff drink... and then fall into a troubled sleep. As you toss and turn having nightmares about information horizons, and gravity strings, remember what the White Rabbit said: "feed your hair." How many bits can fit on the surface of a black hole? (2003) How many Angels can fit on the head of a pin? (1536) A question for every age. Austin Kevin Neilson wrote: > I read an article in "Scientific American" about how much information can be > compressed into a certain volume, and apparently all objects have a Shannon > entropy in addition to the thermodynamic entropy. Also, black holes have a > Shannon entropy that is based on the surface area of the event horizon. I > was totally lost. Can anybody else explain how Shannon's information > theory applies to black holes? > -Kevin
Article: 62543
Mike Treseler wrote: > Consider synchronizing the interface to a faster fpga clock and generate > your own synchronous read and write strobes in just the right places. The reason why I started with the idea of using the HC11 microcontroller's E-clock to clock the wishbone interface was that I imagined it would bring simplicity, as I wouldn't need to deal with the interfacing between the two different clock domains (i.e. the CPU's crystal-derived clock and the FPGA's clock). In my circuit I don't have to worry about the HC11 going to a low power state (where the E-clock signal would stop) as I'm not intending to use such features. Having said that, it appears that things might be more flexible if I decouple that requirement, use a faster clock within the FPGA and deal with the fact that the two buses are now asynchronous to each other. >I don't know the interfaces, but > WB_STB_O <= cs; > is suspect. > > Normally cs lasts for multiple ticks, > the write strobe is one tick, somewhere in the middle. Ok, well if I'm reading the HC11's waveforms correctly, at least on that bus it's slightly different. It has a single R/W* signal. A logic zero on this signal indicates that the present bus cycle is a write operation, and it can be held low for consecutive bus cycles in cases where double-bytes are being written. The R/W* is specified as being valid whenever the ADDRESS bus contains a valid address (which is almost the entire bus cycle); as such it's more a "write request" than a "write strobe". With the programmable chip-selects there is a choice on when they are asserted. Programming a register with the correct value will mean that the chipselect will be asserted as soon as the address is placed upon the bus. Changing that value can change the length of the chipselect strobe to make it only occur for the second half of the bus cycle (when E-clock is high, meaning the device should place data onto the bus). I know 100% that I'm not understanding basic bus interfacing at the moment. I entered this project thinking (for the HC11 at least) that I could basically be on the lookout for a particular clock edge and then simply look at the Read/Write signal and the chipselect to see if the transaction was meant for me... And I think it's almost like that... for example, the HC11 datasheet indicates that on the rising edge of its E-clock (a quarter of its crystal oscillator speed) the read/write strobe, the address and the chipselect (if programmed correctly) will all be valid and hence I could latch them into the FPGA... But that's where I get stuck and my knowledge starts to run out. For a read operation (HC11 reading a register from the FPGA) I think I'm fine... I can detect the signals such as the read/write strobe on the rising edge of the clock and output my data, then as soon as the E-clock goes low I can tristate the bus again (and this makes sense as the HC11 reference manual has the following sentence in it: "The E-clock can be used to enable external devices to drive data onto the data bus during the second half of the bus cycle (E clock high)". So I could have something like hc11_data <= wb_data_o when (hc11_eclk = '1') else "ZZZZZZZZ"; where hc11_eclk is effectively the oe signal mentioned by another poster. However, the waveforms where the HC11 is writing to a register within the FPGA are more confusing for me... When the E-clock goes high the data on the data bus isn't valid yet; however, it's guaranteed to be valid for at least 31 ns after the E-clock's falling edge.
So it appears I can latch the address and read/write type signals on the rising edge of E-clock (and place my data on the bus in case of a read operation), but I have to wait until the falling edge before I can capture the data the HC11 has presented for a write operation. So does that mean I need to clock my module on both edges of the E-clock signal? I thought that's discouraged as much as possible within FPGA designs? Or is this just pointing out how much more horribly confused I have become? > Looks reasonable for a first cut. > You have to compare your sim waveforms to the H11 and wishbone data > sheets. Consider making an oe signal to drive data Z between cycles. Reading the various replies to my initial posting and looking at the various datasheets etc. has made me appreciate exactly how little I actually know about this... or even digital logic in general when it comes to sequential designs. So at least in the meantime I've decided to concentrate on an even smaller subcomponent of the desired goal. Instead of attempting to simulate the entire HC11 to Wishbone bus interface, I'm going to concentrate on getting a simple HC11 bus interface designed and simulated properly. I.e. attempting to basically simulate a 74HC-series 8-bit latch hanging off the HC11's data bus... something I can do well with "real" hardware :-) I think at the moment I have enough issues with respect to properly reading the waveform timing diagrams in the HC11 reference manual to think about the complete design. Especially when you throw in considerations such as how I'm going to deal with issues such as a slave wanting to extend a bus cycle... something which isn't as simple as asserting a signal on the HC11's bus... Once I get that far, then I can start worrying about interpreting the waveforms in the wishbone spec and dealing with the "translation" of signals between the two busses. > > PS: Can anyone suggest a good book on Simulation, in particular with > > relation to VHDL? I've identified this as an area where I'm particular > > weak, and as an area where improvements would make progress a bit > > smoother. > You already know how to run the simulator,so get a copy of > Ashenden's guide to vhdl as a language reference, and get busy. > Consider adopting a synchronous testbench style: > http://groups.google.com/groups?q=oe_demo Thank you for this reference. The example testbench in that thread was sort of what I was aiming for when I talked about reading stimulus from a file. At present my test bench is physically hardcoded to perform the individual bus cycles, i.e. I have something along the lines of cs <= '0'; wait for ABCns; we <= '1'; wait for XYZns; ... blah blah blah ... And I wanted a more "automated" way of doing it. Using a similar technique to that used in the testbench in that thread, and using an array containing the list of desired bus operations and iterating over it, could just be the ticket I'm looking for... Thanks for the help, Christopher Fairbairn
Article: 62544
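For what it's worth, a minimal sketch of the kind of directly E-clocked register slave described above (port and signal names are made up, the address decode is reduced to a single chip select, and address/R/W* decoding is assumed to happen outside this block). Only the write capture uses the falling edge of E; the read path is a plain combinatorial tristate enabled while E is high, so no single register needs both edges:

library ieee;
use ieee.std_logic_1164.all;

entity hc11_reg is
  port (
    eclk : in    std_logic;                     -- HC11 E clock
    cs_n : in    std_logic;                     -- chip select, active low
    rw   : in    std_logic;                     -- '1' = read, '0' = write
    data : inout std_logic_vector(7 downto 0)   -- HC11 data bus
  );
end entity;

architecture rtl of hc11_reg is
  signal reg : std_logic_vector(7 downto 0) := (others => '0');
begin
  -- write: the HC11 guarantees the data valid around the falling edge of E
  process (eclk)
  begin
    if falling_edge(eclk) then
      if cs_n = '0' and rw = '0' then
        reg <= data;
      end if;
    end if;
  end process;

  -- read: drive the bus only during the second half of the cycle (E high)
  data <= reg when (eclk = '1' and cs_n = '0' and rw = '1') else (others => 'Z');
end architecture;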
"Austin Lesea" <Austin.Lesea@xilinx.com> wrote in message news:3FA3000A.45C10F96@xilinx.com... >> > How many bits can fit on the surface of a black hole? (2003) Actually the entropy of a black hole works out to be about 10^66 bits/ cm^2. ClayArticle: 62545
The map report is the easiest place to verify that you've gotten your pullups. It has a section like this. Look under Resistor.

+--------------+-------+-----------+-------------+----------------+-----------+---------+----------+-----------+
| IOB Name     | Type  | Direction | IO Standard | Drive Strength | Slew Rate | Reg (s) | Resistor | IOB Delay |
+--------------+-------+-----------+-------------+----------------+-----------+---------+----------+-----------+
| diff_clk_out | DIFFM | OUTPUT    | LVPECL_33   |                |           | OUTDDR  |          |           |
| diffclk_in   | DIFFM | INPUT     | LVDS_33     |                |           |         |          |           |
| clk_in       | IOB   | INPUT     | LVTTL       |                |           |         |          |           |
| led<0>       | IOB   | OUTPUT    | LVTTL       | 12             | SLOW      |         |          |           |
| led<1>       | IOB   | OUTPUT    | LVTTL       | 12             | SLOW      |         |          |           |
| led_b<0>     | IOB   | OUTPUT    | LVTTL       | 12             | SLOW      |         |          |           |
| led_b<1>     | IOB   | OUTPUT    | LVTTL       | 12             | SLOW      |         |          |           |
| push1        | IOB   | INPUT     | LVTTL       |                |           |         | PULLUP   |           |
| reset_n      | IOB   | INPUT     | LVTTL       |                |           |         |          |           |
| uled         | IOB   | OUTPUT    | LVTTL       | 12             | SLOW      |         |          |           |
+--------------+-------+-----------+-------------+----------------+-----------+---------+----------+-----------+

"Max" <cialdi@firenze.net> wrote in message news:8e077568.0309252330.21aae0c5@posting.google.com... > I use xilinx ise webpack 6.1 sp1. > In may project I tried to add contrains like: > > NET "probes<0><0>" LOC = "D11" | PULLUP ; > NET "probes<0><1>" LOC = "D12" | PULLUP ; > NET "probes<0><2>" LOC = "C12" | PULLUP ; > > This signals are all input. > > In translate report is reported: > > Attached a PULLUP primitive to pad net probes<0><2> > Attached a PULLUP primitive to pad net probes<0><1> > Attached a PULLUP primitive to pad net probes<0><0> > > > But in place&route report there is no reference to pullups: > Resolved that IOB <probes<0><0>> must be placed at site D11. > Resolved that IOB <probes<0><1>> must be placed at site D12. > Resolved that IOB <probes<0><2>> must be placed at site C12. > > Even in pad report is not mentioned pullup resistor for that signals. > > How can I be sure about the presence of pullup resistors on that ports? > > thanks
Article: 62546
Henk Thanks for the notice. Is there any chance you will have a C compiler available for picoblaze? "Henk van Kampen" <henk@mediatronix.com> wrote in message news:23ecd97d.0310240736.7b08c8dd@posting.google.com... > Recently I have updated my Picoblaze (tm Xilinx) development tool > pBlazIDE and added some documentation. Additionally I have published > some example code and demonstration files. Please feel free to check > this out. Its all freeware. Check under 'Tools'. > > Regards, > Henk van Kampen > www.mediatronix.comArticle: 62547
Hi, Andras Tantos wrote: > - There's no wait-state generation. You don't detect any wait-state > requests from the WB side and don't generate wait-states for your async > master (HC11). That can cause problems if you communicate with slow > devices (for example with a FIFO which, when full, generated waits) At the moment the three initial wishbone slaves I'm intending to interface to are all pretty primitive and have their ACK_O simply tied to their STB_I input, as allowed by the wishbone spec if they don't require any waitstates. As such I shouldn't need wait states, but it is something I've been conscious about and keeping in the back of my mind... knowing full well that Murphy's law will present a nice wishbone slave that I desire to use at some stage in the future which will require waitstates... I'm just not exactly sure how to deal with it, considering I can't stall the HC11 bus. > I'll paste my circuit here for reference. Note, that it does not use the > CPUs clock to sync up the WB part, so it can run on much higher clock > speeds (in my case 70MHz). That can help meet the timing (with proper > wait-state handling of course) but can cause all kinds of meta-stability > issues, I'm not sure I've addressed properly either. Please note that I'm > not a professional either, I'm not claiming my design to be nice or > flowless. It at least worked in a real HW... Any comment are welcome. As part of my research and investigations today I discovered an asynchronous wishbone master on www.opencores.org which appears to be developed by yourself (http://www.opencores.org/cores/wb_tk/wb_async_master.shtml). Is this correct? It appears that the version on the opencores website is a lot simpler than the version you presented in your posting. Reading through the source for the one on opencores.org has cleared a lot of things up for me and "turned a couple of lightbulbs on" in my mind... What are the main differences between the two? I'm having difficulty following the one in the newsgroup posting, while I can pretty much follow the one on www.opencores.org. The testbench support code in the http://www.opencores.org/cores/wb_tk project has also helped me out. The use of a VHDL function to wrap up the inner workings of a bus cycle should stop me from duplicating all those lines of code in my testbench for every read/write I perform on the bus... Thanks, Christopher Fairbairn.
Article: 62548
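As a side note, the "ACK_O tied to STB_I" arrangement mentioned above amounts to nothing more than the following (a minimal sketch with made-up names; the Wishbone spec allows this for slaves that always complete an access in the cycle it is presented, which is exactly what a master that cannot stretch the HC11 bus cycle needs):

library ieee;
use ieee.std_logic_1164.all;

entity wb_simple_ack is
  port (
    cyc_i : in  std_logic;
    stb_i : in  std_logic;
    ack_o : out std_logic
  );
end entity;

architecture rtl of wb_simple_ack is
begin
  ack_o <= cyc_i and stb_i;   -- zero wait states: ack in the same cycle
end architecture;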
Hi, Does anyone know how I can convert my verilog code to VHDL? StanleyArticle: 62549
Hi, Has anyone used or written any WinCE driver for this card? (Wildcard from Annapolis Micro Systems, www.annapmicro.com) Where can I possibly find one? Any information would be greatly appreciated. Stanley