Yeah, it's been quite puzzling - I had the local Xilinx FAE out here the other day and we weren't able to get anywhere on it (besides agreeing that the PLB and OCM busses look like they should). So here are the answers to your questions, followed by the more recent developments:

1) We are just running out of flat memory space in privileged mode.

2) We have a Green Hills probe to debug - I haven't tried using the XMD program. The boot code is C. Code is copied from FLASH into SDRAM and, using the Green Hills probe, I am able to verify the contents of SDRAM.

3) When I stop the code the first time, it goes to 0x2004 and I can still read/write registers. When I do a step or press run again, I get something like a "Processor not stopped after single step" and then I cannot read/write any register from the Green Hills probe command line. From the Green Hills MULTI debugger, I get "timeout waiting for core to stop in read of GPR 30. Single Step Failed."

4) Yeah, the link register increments by 4, the PC/CTR registers are correct, and then register 5 is copied into register 6. I don't know anything about how interrupts work on the PPC, so I'll have to read up on that.

So the FAE suggested trying to break the problem down to eliminate the boot code. To that end, I have taken a short program that prints "entering code()" and "exiting code()" and then stops. I boot from this code and, using the debugger, reset. So at this point, none of the registers or memory have been set up. I use the debugger to load SDRAM 0x2000-0x200C (the first four instructions) and program the PC to 0x2000. If I load the design that uses the PLB RAM instead of the OCM RAM and load the registers from the last example (none are set up) and locations 0x2000-0x200C in SDRAM, I am able to step through all of these instructions. Thus, it seems like nothing is wrong with the software, with the exception of not setting up a register value to enable the device to switch from OCM to SDRAM. The fact that it boots correctly out of IOCM and jumps to SDRAM at all seems to indicate that the hardware logic is in place.

I hope that helps clarify what we've got in place and points out what I'm missing.

Thanks,
-Charles

Article: 101226
I suggest you derive the 200 Hz from the 100 MHz with a synchronous counter, which means you divide by 500,000. That takes 19 flip-flops, and there are various ways to achieve exactly 500,000, either through BCD stages, or by loading the binary counter with the appropriate value whenever it rolls over. You end up with ~20 LUTs and flip-flops, and your signal is synchronous with the original 100 MHz, so you have no clock-crossing problems. I think this is the only meaningful and efficient way to address the problem. You do not want to cross asynchronous clock boundaries if you can somehow avoid that.

Peter Alfke

Article: 101227
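[A minimal Verilog sketch of the synchronous divider Peter describes; the module and signal names are illustrative, not from the post. It counts down through 500,000 states and emits a one-cycle-wide 200 Hz enable pulse, so everything stays in the 100 MHz domain:]

module div_500k (
    input  wire clk_100mhz,   // free-running 100 MHz clock
    input  wire rst,          // synchronous reset, active high
    output reg  tick_200hz    // one-cycle-wide pulse, 200 times per second
);
    reg [18:0] count;         // 19 bits cover 0 .. 499_999

    always @(posedge clk_100mhz) begin
        if (rst) begin
            count      <= 19'd499_999;
            tick_200hz <= 1'b0;
        end else if (count == 19'd0) begin
            count      <= 19'd499_999;   // reload on rollover: divide by 500_000
            tick_200hz <= 1'b1;
        end else begin
            count      <= count - 19'd1;
            tick_200hz <= 1'b0;
        end
    end
endmodule

[Using tick_200hz as a clock enable on the downstream registers avoids creating a second clock net at all.]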
"Peter Alfke" <peter@xilinx.com> wrote in message news:1146164533.757065.313390@j33g2000cwa.googlegroups.com... >I suggest you derive the 200 Hz from the 100 MHz with a synchronous > counter, which means you divide by 500 000. > That takes 19 flip-flops, and there are various ways to achieve exactly > 500 000, either through BCD stages, of by loading the binary counter > with the appropriate value whenever it rolls over. You end up with ~20 > LUTs and flip-flops, and your signal is synchronous with the original > 100 MHz, so you have no clock-crossing problems. I think this is the > only meaningful and efficient way to address the problem. > You do not want to cross asynchronous clock boundaries, if you can > somehow avoid that. > Peter Alfke But Fizzy still has the issue of skew between the *synchronous* domains that needs to be detailed, right? It certainly wouldn't be safe to have a 100 MHz register directly feed a 200 Hz register although the return trip *might* be a simple path if the total 200 Hz clock skew is well under the 10 ns master clock period. (For others' benefit - Peter knows these tricks:) My own favorite divider is one that always subtracts 1 from either the counter value or 249999 when the counter is -1. The resulting count is 249998 to -1, inclusive, for an effective divide-by-250k. The MSbit pulse indicating -1 just feeds a toggle flop and the 200 Hz, 50% duty cycle clock is there.Article: 101228
Roger Bourne wrote:
> Hello all,
>
> I always wondered the following:
>
> How are constants implemented in an FPGA? How many can be stored
> without causing bottlenecks (routing issues)?
> A quick scan of a Spartan3 indicated there is no ROM.
>

Actually, the LUTs of all FPGAs **ARE** ROMs! They are very SMALL ROMs, but they are a ROM until you reconfigure the FPGA differently.

Jon

Article: 101229
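[As a small hedged example, not from the thread: a constant table written like this is typically absorbed into LUTs, i.e. the LUTs serve as a 16-deep by 8-bit ROM. The values shown are arbitrary:]

module const_rom (
    input  wire [3:0] addr,
    output reg  [7:0] data
);
    // A constant lookup table; synthesis maps this onto LUTs (a 16x8 ROM).
    always @* begin
        case (addr)
            4'h0:    data = 8'h3A;
            4'h1:    data = 8'h7F;
            4'h2:    data = 8'hC4;
            4'h3:    data = 8'h01;
            default: data = 8'h00;   // remaining entries, arbitrary here
        endcase
    end
endmodule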
pavithra.eswaran@gmail.com wrote:
> Hi,
> We would like to drive LEDs using Ramp Waveforms using Xilinx CPLDs.
> Is it possible to write a driver and control the LED directly from CPLD
> or should an external D/A Converter be used?

Depends on the precision, and on whether you need linear current or can tolerate an average value. Most common for human-view applications is PWM or PDM to drive the LED, but that can give EMC issues if you have large LED currents. If you want to avoid the EMC, you can do PWM/PDM to a DC voltage, and add a linear LED driver (now you have thermal issues :)

-jg

Article: 101230
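[A hedged Verilog sketch of the PWM approach Jim mentions; all names and the 8-bit resolution are assumptions, not from the post. Sweeping 'level' slowly up and down from other logic would give the ramp-brightness effect asked about:]

module led_pwm (
    input  wire       clk,
    input  wire [7:0] level,   // brightness: 0 = off, 255 = (nearly) full on
    output reg        led      // drives the LED pin (with a suitable series resistor)
);
    reg [7:0] count = 8'd0;

    always @(posedge clk) begin
        count <= count + 8'd1;         // free-running 8-bit ramp
        led   <= (count < level);      // duty cycle = level / 256
    end
endmodule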
rickman wrote:
> I am asking that you not wait until a frustrated user reports your
> mistakes. I am asking that you have someone look at what it takes to
> find all the info that an engineer needs to configure these parts and
> make the info consistent and more usable.

Documentation by the designer is notoriously awful, but to start with, the designer is who is available. It takes a truly wonderful designer to put himself in the position of someone encountering his design for the first time. Just ask the people trying to use my documentation. Input from frustrated users is necessary. I don't have the information to know whether the documentation for this part is as good as it could be given the user input.

Article: 101231
Francesco wrote:
>> Have you also looked at the quite similar ( but open sourced )
>> Lattice Mico8,
>
> Yes I had a look at Lattice Mico8,
> but to be honest at work I use only Xilinx...

FWIR, when I looked at them both, the Mico8 lacked the Return with Flag option, and the Mico8 looked to have an easier way to extend the JMP/CALL, to allow larger code. [which will matter to C users :) ] - ie when doing a compiler, look at allowing different call sizes, as legal/error, and some simple means to modify the opcode generation to match that detail of the core.

>> and also the PacoBlaze ?
>
> I think Pacoblaze is compatible with picoblaze,
> so you can use the compiler with pacoblaze in any FPGA.

I think there are some (optional) extensions in PacoBlaze, that could make sense to support in a compiler.

-jg

Article: 101232
fpga_toys@yahoo.com writes:
> We have modems today that broke the modulation "laws" set in 60's ...
> by more than an order of magnitude.

What "laws" are those? AFAIK, today's modems are still subject to the Shannon-Hartley Theorem and the Nyquist Sampling Theorem, which substantially predate the 1960s. Perhaps these putative "laws" from the 1960s were promulgated by people with little understanding of information theory?

Article: 101233
vssumesh wrote:
> The xilinx template for V4 block ram with one read/write and one
> read port is not synthesizing correctly in the synplify.

I would start with the synplify template.

-- Mike Treseler

Article: 101234
Stéphane-

> My question is : Is the [Xilinx PCI core] able to transparently handle the fact that the
> board is plugged in a 32 or 64 bit PCI slot ?

Do you see REQ64 and ACK64 or similar signals on the module interface?

-Jeff

Article: 101235
Hi,

I've got a little knowledge about fpga's. I've used for fun spartan2 at school. I'm interested in embedded systems in view of operating systems. Which development platform (starter kit) do you suggest for such a person like me? My expectations are:

* low costs,
* "helpful hand" - projects from which I can learn, gain knowledge, documentation (I haven't met ppl yet here who would help me with it - that's why it's so important for me, as it's gonna be my very beginning with this stuff),
* included good development software (with cores which would allow me to start to play),
* features of platform (ethernet, usb, video, audio, connectors etc). The nice thing would be to have PCI. Either edge or slot.

XUP Virtex-2 Development Platform (http://www.digilentinc.com/) - it caught my attention. Are there better platforms?

Thanks.

Article: 101236
OK, Xilinx was uniquely unhelpful this time, so I resort to this list.

My setup is a SystemACE connected to 1 or 2 Virtex2 FPGAs, and also to a PPC405GPr running Linux. The second FPGA is optional, and when the optional FPGA is installed, the JTAG is rerouted through it and a different ACE file supplied. The CF card contains a second partition where the Linux fs (ext3) lives.

My problem is that when the second FPGA is installed, the board will crash Linux within a few minutes. The SystemACE driver is getting an error that the JTAG configurator was unable to read the configuration stream from the CF. (This is 1/2 nonsense, because the chips were programmed by the SystemACE under the watchful eye of u-boot before Linux was even started.) The second FPGA is getting programmed (during u-boot) because the PPC is able to discover its PCI id at boot time.

The CFGADDR[2:0] bits are unconnected on our board, as are the CFGMODEPIN, POR_BYPASS and POR_RESET. The CFGPROG goes to both FPGA devices (with a single pullup.) The CFGINIT is connected to only the first FPGA.

So does anybody have any clue why in blue blazes the SystemACE is going nuts when the second FPGA is installed?

--
Steve Williams            "The woods are lovely, dark and deep.
steve at icarus.com        But I have promises to keep,
http://www.icarus.com      and lines to code before I sleep,
http://www.picturel.com    And lines to code before I sleep."

Article: 101237
to McGettigan:

Thank you for your reminder. I use the VII Pro xc2vp20 device. There are 8 GTs, and I used 4 of them. The protocol is Custom. The clk is 155MHz and the data width is 2 bytes, so these 4 bonded channels could process a 10Gb/s data flow in theory. Unfortunately, however, it could only work at 1.5Gb/s in the field.

At this (1.5Gb/s) rate, all the things look right: the FSM on both TX and RX sides transitioned well - no invalid state, no invalid transition. And the Rocket IO works well: no rxlossofsync, no rxnotintable... RX can receive the 4 SOPs at the same time (channel bonding ok).

As soon as the flow rate increases to 2Gb/s and above, the RX side could not receive the 4 SOPs simultaneously, which is one of the transition conditions of my RX FSM, so the FSM will stop. And I know that it lost channel bonding. In order to overcome this error, I send CBS once again after sending 65535 data packets at the TX side. CBS would let the RX FSM continue to move, but it looks weird. There are some invalid transitions happening, and some error signals of the Rocket IO appear, such as rxlossofsync, rxnotintable.

That is my problem. Many thanks!

king

Article: 101238
For the price, if you are a student, the XUP is a very nice board. The problem however is that it has a XC2VP30, so WebPACK doesn't work with it - I think that is correct, but I have not verified it. But anyway, think about the price of ISE and EDK as well. I personally do like the XUP though.

Article: 101239
The code which I used from the Xilinx template is:

always @(posedge clk)
begin
    if (we) RAM[addr1] <= in;
    out1 <= RAM[addr1];
    out2 <= RAM[addr2];
end

This created a single block RAM with two output locations and a single write location. But if this is used in Synplify with the /* synthesis syn_ramstyle = "block_ram" */ directive, it will give distributed RAM. Then I tried the Synplify version:

reg [31:0] RAM [15:0] /* synthesis syn_ramstyle = "block_ram" */;

always @(posedge clk)
begin
    if (we) RAM[addr1] <= in;
    addr1_latch <= addr1;
    addr2_latch <= addr2;
    out1 <= RAM[addr1_latch];
    out2 <= RAM[addr2_latch];
end

The above code created two block RAMs, one for each output. It is using two-port RAMs, but in each RAM the second port is unused.

Article: 101240
Karel-

I have fought with this also. My conclusion is that clock-oriented timing constraints, including the CLOCK_SIGNAL constraint, are basically worthless for fabric signals. If there is any app note that truly shows how to use them on non-clock signals, I'd sure like to see it. Once you cause XST to "jump off the clock net", as Austin puts it, I don't know what you do to get back on it.

If your clocks are truly free-running, and you never have to switch away from a dead clock, then simply use a BUFGMUX, as has been suggested many times on the group. You will be good.

If one clock is free-running, but one may stop, then you might still be able to use a BUFGMUX by doing this:

- Use the I0 input on the BUFGMUX for the always-running clock
- Ensure the clock that could stop (on the I1 input) always stops at a low level
- Use a state machine based on another, independent clock to measure the output of the BUFGMUX and determine when to switch

Following these guidelines, a BUFGMUX appears to work without getting stuck. There may be some Xilinx evidence to support this, if you look carefully at the behavior of a BUFGCE, which is a BUFGMUX with I1 tied to GND and the sel line used as CE (enable). With a BUFGCE, the CE line can be toggled any time. I am in the process of testing the above use of a BUFGMUX billions of times to verify.

I would like to take this opportunity to ask Peter and Austin to let us know exactly what the circuitry in the BUFGMUX is. It's too important a feature to be guessing how it behaves when its inputs are not always running. If it does not exactly match Peter's Six Easy Pieces circuit, which I suspect it does not based on what I've heard on this group, then we should know the modifications. If this info is covered under NDA, that should be OK, as people who need to know should have no trouble going that route.

-Jeff

Article: 101241
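[For illustration, a minimal wrapper showing how the BUFGMUX might be wired per Jeff's guidelines; the module and net names are mine, and BUFGMUX here is the Xilinx global clock mux primitive from the unisim library:]

module clk_switch (
    input  wire clk_always,    // free-running clock  -> I0
    input  wire clk_may_stop,  // clock that may stop (stops low) -> I1
    input  wire sel,           // driven by a state machine on an independent clock
    output wire clk_out        // muxed clock to the rest of the design
);
    // BUFGMUX: Xilinx global clock multiplexer primitive (unisim library)
    BUFGMUX clkmux_i (
        .O  (clk_out),
        .I0 (clk_always),
        .I1 (clk_may_stop),
        .S  (sel)
    );
endmodule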
Steve, Peter and Austin-

Can you give definitive instructions -- or point me to who can -- for initializing an array of registers? I need to initialize an array of registers in one way, set values differently upon Reset, and set values differently again during normal operation. I have this code:

reg [31:0] array [11:0];
// synthesis attribute INIT of array is 64'h0C0001820C000080;

which is intended to initialize the first 2 elements of the array, but doesn't set any bits, although XST appears to "accept" the INIT attribute. What is needed? The whole 384 bits set with one number? Can this be done using 384'hxxxx... syntax?

I have opened a webcase through our local FAE, but after about 3 weeks have no clear answers other than to instantiate a RAM using CoreGen and use a .coe file, or read in a .dat file for simulation purposes but not synthesis. To use a RAM I would have to switch the array row/column to get XST to recognize asynchronous reads, and I have done that in other cases, but in this case I can't because I actually need 32-bit registers (they are accessible via a host processor).

For something like array initialization, there has to be a solid answer -- hopefully some actual code showing how to do it. Thanks.

-Jeff

Article: 101242
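[A hedged sketch only, not a confirmed XST answer: one common Verilog style is to give the registers power-up values with per-element initial assignments and to apply the (different) reset values in the clocked process. Whether a particular XST release honors initial values on fabric registers would still need to be verified; all names and values below are illustrative:]

module reg_array (
    input  wire        clk,
    input  wire        rst,          // synchronous reset
    input  wire        host_we,
    input  wire [3:0]  host_waddr,
    input  wire [31:0] host_wdata,
    input  wire [3:0]  host_raddr,
    output wire [31:0] host_rdata
);
    reg [31:0] array_r [0:11];
    integer i;

    // Power-up (bitstream) values: per-element initial assignments.
    initial begin
        array_r[0] = 32'h0C000182;
        array_r[1] = 32'h0C000080;
        for (i = 2; i < 12; i = i + 1)
            array_r[i] = 32'h00000000;
    end

    // Reset-time values (different from power-up) and normal host writes.
    always @(posedge clk) begin
        if (rst) begin
            for (i = 0; i < 12; i = i + 1)
                array_r[i] <= 32'hFFFFFFFF;   // illustrative reset values
        end else if (host_we) begin
            array_r[host_waddr] <= host_wdata;
        end
    end

    assign host_rdata = array_r[host_raddr];   // host-visible read port
endmodule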
Eric Smith wrote:
> fpga_toys@yahoo.com writes:
> > We have modems today that broke the modulation "laws" set in 60's ...
> > by more than an order of magnitude.
>
> What "laws" are those? AFAIK, today's modems are still subject to the
> Shannon-Hartley Theorem and the Nyquist Sampling Theorem, which
> substantially predate the 1960s. Perhaps these putative "laws" from
> the 1960s were promulgated by people with little understanding of
> information theory?

Without a doubt. "Information theory" and achievable practice would take several decades to mature in the progression of doctoral study, general graduate study, and finally in the 1970's become mainstream undergraduate material.

But much more than that was a steady progression of improvements in the transmission channel, which started out with attachment limitations of acoustical connection to the carbon mics on 500 desk sets that were the norm in the mid 1960's and into the 1970's. Carbon packing, analog voice bandwidth with loading coils of around 5 KHz, purely analog encode/decode circuits, etc ... all made it common knowledge that the upper limit on consumer modems was well under 1200 baud. Direct connections via a Data Access Adapter would cost another $100/mo and get you up to 1200 baud for a Vadic, or even 1800 or 2400 baud for a $5,000 modem in the mid 1970's.

The transition to 2500 desksets during the late 1960's and 1970's allowed for lower loop currents and fewer low pass filters (loading coils) as the 500 desk sets were phased out. With that came slightly higher voice bandwidth, and a new generation of modems which capitalized on it where the new lines with less cross talk and fewer loading coils were installed. Deregulation of the customer interface led to a rapid evolution of very low current electronic handsets, and improvements in the cross talk .... plus the ability for consumer modems to directly attach to the line. By the mid 1980's bandwidth was available to support 2400-9600 baud modems, and with cheap microprocessors a host of digital modem technologies emerged, and a series of digital signal processing modems (including Telebit Trailblazers and advanced phase encoding) were developed which could take advantage of the cleaner phone lines with higher bandwidth in many areas. As digital exchanges became the norm, so did bandwidth, and the evolution to 56K today.

So yes, there were plenty of people who understood the limits of the technology of the day, and were quite able to state what the best achievable modem bandwidth was for that state of the Bell system. But the telephone system improved, digital technologies emerged, and with them those limits continued to be broken with advancements in the medium, coding technology, compression, error correction, etc .... none of which were visible in the 1960's or 1970's when everyone "knew" just what the fastest modem was that could be built assuming best case engineering practices. Just as theory existed that man could fly some 1,000 years before the Wright brothers actually proved it, Shannon's visionary work didn't enable 56kbps modems to be even dreamed of in 1948, when the regulatory, technology, and bandwidth limits were only a few kbps using best possible practice, and a huge fraction of that in reality.

Article: 101243
vssumesh wrote:
> The code which i used from the xilinx template is
> always @(posedge clk)
> begin
>   if(we) RAM[addr1] <= in;
>   out1 <= RAM[addr1]; out2 <= RAM[addr2];
> end
> This created a single block RAM with two out location and a single writ
> location.
> But if this is used with synpify with /* synthesis syn_ramstyle =
> "block_ram" */ directive will give distributed RAM.
> Then i tried synplify version
>
> reg [31:0] RAM [15:0] /* synthesis syn_ramstyle = "block_ram" */;
> always @(posedge clk)
> begin
>   if(we) RAM[addr1] <= in;
>   addr1_latch <= addr1; addr2_latch <= addr2;
> end
> out1 <= RAM[addr1_latch]; out2 <= RAM[addr2_latch];
>
> The above code created two block RAM for each output. It is using two
> port RAMs but in each RAM second port is unused.

It may seem the same, but try

// (using your own dimensions)
wire [n:0] RAM_addr1 /* synthesis syn_keep = 1 */ = RAM[addr1];
wire [n:0] RAM_addr2 /* synthesis syn_keep = 1 */ = RAM[addr2];

always @(posedge clk)
begin
  if (we) RAM[addr1] <= in;
  out1 <= RAM_addr1;
  out2 <= RAM_addr2;
end
______________________

The template I recall includes the write as you show, but references by wires - the syn_keep may not be needed at all, but there may be other aspects of your code that trip up the system. If your address range is small, try arbitrarily making it BlockRAM sized. I'll glance at it again when I'm in front of my synthesizer.

Article: 101244
Eric Smith wrote:
> fpga_toys@yahoo.com writes:
> > We have modems today that broke the modulation "laws" set in 60's ...
> > by more than an order of magnitude.
>
> What "laws" are those? AFAIK, today's modems are still subject to the
> Shannon-Hartley Theorem and the Nyquist Sampling Theorem, which
> substantially predate the 1960s. Perhaps these putative "laws" from
> the 1960s were promulgated by people with little understanding of
> information theory?

And by the way ... I still own 3 of my first 4 modems ... I sold my 110 baud ASR33 with built-in coupler back in the 1970's to buy my 300 baud TI "Silent" 700 and an original AJ oak-box 300 baud modem (I still have the AJ). I also still own my Vadic 1200 green shoebox, which connected via a Pacific Bell installed DAA from 1977 to 1979 so I could dial long distance into work from San Luis Obispo to Menlo Park. I also still own the two Telebit Trailblazers that replaced it a few years later. Even as late as 1999 my rural home phone lines would only support 14.4K baud connection rates on a good day ... and 9,600 more than likely, so the Telebits remained useful for years after they were out of normal use. That year I started a wireless internet cooperative to get broadband via early Aironet 802.11b radios costing $1,200/user for UC4800, LMR600 and Conifer T24 dishes.

I also still have my 1976 home computer (LSI11/03) and my 1980 home computer (LSI11/23 with V7 Unix) and the Fortune 32:16 and LSI11/73 which replaced it. I also still own the TRS80 Model 1 that I used to develop the Z80 firmware for the Ampex TMS100 9-track tape formatter I built for the LSI11/03 in 1978, as well as a lot of other interesting period toys ... including a desktop smoked-plexiglass PDP-8 lab machine, a pair of LSI11/03 based VT71 DEC terminals, a modest collection of ADM3's, LA21 DecWriters, and other period toys. I wish I had purchased the 1401 tape system I first programmed on when it was salvaged :) Or one of the Bendix G15's I used later. But that is another story :)

Article: 101245
Yes, here is the PCI signal list I have for the PCI interface section of the design:

entity pcim_top is
  port (
    -- PCI ports; do not modify names!
    AD       : inout std_logic_vector(63 downto 0);
    CBE      : inout std_logic_vector( 7 downto 0);
    PAR      : inout std_logic;
    PAR64    : inout std_logic;
    FRAME_N  : inout std_logic;
    REQ64_N  : inout std_logic;
    TRDY_N   : inout std_logic;
    IRDY_N   : inout std_logic;
    STOP_N   : inout std_logic;
    DEVSEL_N : inout std_logic;
    ACK64_N  : inout std_logic;
    IDSEL    : in    std_logic;
    INTR_A   : out   std_logic;
    PERR_N   : inout std_logic;
    SERR_N   : inout std_logic;
    REQ_N    : out   std_logic;
    GNT_N    : in    std_logic;
    RST_N    : in    std_logic;
    PCLK     : in    std_logic;

As I'm just getting to PCI for this design, can someone quickly summarize what this process (a 64 bit design on a 64/32 bit interface) is about? What is involved, what has to be added, what has to be watched compared to a "normal" 64 bit design (which I have)?

Thanks for your help!

Stéphane.

"Jeff Brower" <jbrower@signalogic.com> a écrit dans le message de news: 1146177167.571111.263600@j33g2000cwa.googlegroups.com...

Stéphane-

> My question is : Is the [Xilinx PCI core] able to transparently handle the fact that the
> board is plugged in a 32 or 64 bit PCI slot ?

Do you see REQ64 and ACK64 or similar signals on the module interface?

-Jeff

Article: 101246
Jim Granville schrieb:
> Most common for human-view applications is PWM or PDM to drive the LED,
> but that can give EMC issues, if you have large LED currents.

You can route the current in alternating loops. That should remove almost all EMC issues.

Kolja Sulimma

Article: 101247
Hi There,

I just generated an Aurora sample design that communicates between 2 MGTs using Coregenerator. How do I configure two specific MGTs to be used in the design (say MGT4 & MGT9)?

I tried to use PACE to assign the I/Os of the Aurora design to the pins on the board, but the MGT pins are disabled (color coded: Brown, and the legend: Gigabit serial). How do I assign the TX signals (TX_N & TX_P) and RX signals (RX_N & RX_P) to the MGTs?

Thx in advance,
Billu

Article: 101248
> Using a mux to select between two different clocks can cause you a lot
> of trouble, because you must avoid output glitches.
> I published an absolutely safe solution as the final item in "Six easy
> pieces" as TechXclusives (search for it on the Xilinx website.)
> That design is guaranteed to switch glitchfree between two free-running
> clocks, no matter when or how you activate the Select input.
> Peter Alfke, Xilinx Applications

Hello Peter,

I think I shouldn't care about the glitches: the datapath is in reset when a clock switch occurs. The reset is deactivated once the switch has changed state. My biggest problem is the constraints; how should I constrain the datapath in the three different clock domains?

Thanks and best regards,
Karel Deprez

Article: 101249
Thanks for the information.

I've just looked in the user manual at the sections you mention and it explains how my problem should be handled. It is up to the user to handle this using the M_FAIL64 signal. I looked at the actual design I have to modify and it seems to be handled.

So I guess I have the answer to my question:
NO, the Xilinx IP doesn't handle it transparently, it is the user's responsibility.
YES, the actual design I have handles it.

Thanks for your help.

Stéphane.

"John_H" <johnhandwork@mail.com> a écrit dans le message de news: 2o64g.8392$tT.7078@news01.roc.ny...
> Read the Logicore PCI v3.0 User Guide, Initiator 64-bit Extension,
> Additional Considerations, Monitor the Target Response.
>
> It appears the PCI spec requires that the user handle the switch from a
> desired 64-bit initiator transaction as a 32-bit sequence after the original
> attempt at a 64-bit transaction is terminated by the target.
>
> "sjulhes" <t@aol.fr> wrote in message
> news:44509f8a$0$12862$626a54ce@news.free.fr...
> > Hello,
> >
> > We have a design with master PCI DMA in 64 bits. This design implements the
> > xilinx PCI 64/32 bits IP.
> > The problem is that our board should be plugged in various versions of the
> > PCI bus ( 64 bits/ 66 Mhz and 32 bits/ 33 mhz, 32 bits / 66 Mhz ).
> >
> > My question is : Is the IP abble to transparently handle the fact that the
> > board is plugged in a 32 or 64 bit PCI slot ?
> >
> > The goal is to have only one design that feeds the PCI master DMA section
> > with 64 bits data whether the bus is 32 or 64 bits wide.
> >
> > What are the limitations with this IP ?
> > Are there specifics configuration or design stuffs to add ?
> >
> > Thanks for your feedbacks.
> >
> > Stéphane.