Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Hi, I connected the interrupt controller, uartlite and timer modules to the
MicroBlaze in XPS. I set up the timer to have higher priority than the
RS232 in the interrupt controller and then set both interrupts as level
triggered using the following parameter:

PARAMETER C_KIND_OF_INTR = 0b00

The connection is as follows:

PORT Intr = rs232_Interrupt & ten_us_timer_Interrupt

During compilation though, XPS is overriding these parameters:

WARNING:MDT - IPNAME:interrupt_controller INSTANCE:xps_intc -
C:\perforce\depot\eni\eng\projects\DEV\smykabox\microblaze_test\microblaze_test.mhs
line 222 - PARAMETER C_KIND_OF_INTR has value 0b00 specified in MHS, but
tcl is overriding the value to 0b00000000000000000000000000000010

INFO:MDT - IPNAME:interrupt_controller INSTANCE:xps_intc -
C:\Xilinx\10.1\EDK\hw\XilinxProcessorIPLib\pcores\xps_intc_v1_00_a\data\xps_intc_v2_1_0.mpd
line 50 - tcl is overriding PARAMETER C_KIND_OF_EDGE value to
0b00000000000000000000000000000010

INFO:MDT - IPNAME:interrupt_controller INSTANCE:xps_intc -
C:\Xilinx\10.1\EDK\hw\XilinxProcessorIPLib\pcores\xps_intc_v1_00_a\data\xps_intc_v2_1_0.mpd
line 51 - tcl is overriding PARAMETER C_KIND_OF_LVL value to
0b00000000000000000000000000000001

The override would mean that the UART interrupt is rising-edge triggered
but the timer interrupt is high-level triggered. The data sheet for the
UART clearly says that the receive interrupt is high-level triggered and
the sending interrupt is rising-edge triggered. I am not sure why XPS is
overriding my original settings. Thanks for the help understanding this.

Amish

Article: 139851
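[Archive editor's note, an assumption about how EDK works rather than
something stated in the post: the xps_intc tcl typically recomputes
C_KIND_OF_INTR from the SENSITIVITY that each connected peripheral
declares on its interrupt port in its own MPD file, which would explain
why a hand-written value in the MHS gets overridden. A hypothetical
fragment:]

```
# Hypothetical MPD excerpt for an interrupting peripheral. The intc tcl
# reads the SENSITIVITY of each signal wired into
#   PORT Intr = rs232_Interrupt & ten_us_timer_Interrupt
# and rebuilds C_KIND_OF_INTR / C_KIND_OF_EDGE / C_KIND_OF_LVL bit by
# bit from those declarations, ignoring the value written in the MHS.
PORT Interrupt = "", DIR = O, SIGIS = INTERRUPT, SENSITIVITY = EDGE_RISING
```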
Well, I had to reinstall the whole SuSE 10.3 system. After a standard
software installation and package update, I tested the installation (as
root) of the IDE & EDK 10.1 with SP3 again, and it succeeded. So,
something might have been missing in the original installation, I cannot
say what, but just in case someone gets the same problem: just test again
after a system update (as root).

Article: 139852
rickman <gnuarm@gmail.com> wrote:
>On Apr 15, 4:19 pm, jean-francois hasson <jfhas...@club-internet.fr>
>wrote:
>> Hi,
>>
>> We are looking at interfacing the Cyclone III EP3C40 with an SDRAM at 90
>> MHz. We are considering having the FPGA generate the clock to the
>> interface and find a way to ensure both the sdram and the cyclone III
>> are in phase regarding the clock. We could not find up to now a
>> mechanism that would ensure that both the SDRAM and the cyclone III will
>> have their clock almost with the same phase relationship over
>> temperature, voltage and process. Could someone provide us with
>> indications regarding the implementation ?
>> Today we thought of using a specific PLL to generate a delayed clock
>
>Still, if you need to reduce that, you can have the FPGA generate the
>clock, routing it to the SDRAM with a trace of a known length. Also
>route a second, identical clock output back to the FPGA with the same
>length trace. This may require some serpentine coils, but with a
>short route it shouldn't use too much board space. Then both devices
>will be receiving an external clock that is totally in phase other
>than any skew generated internally in the FPGA.

Totally useless. You don't need to synchronise the internal FPGA clock
with the SDRAM. Just clock the SDRAM from the FPGA and make sure you
meet setup and hold times on both FPGA and SDRAM. That's all that
matters.

--
Failure does not prove something is impossible, failure simply indicates
you are not using the right tools...
"If it doesn't fit, use a bigger hammer!"
--------------------------------------------------------------

Article: 139853
On Apr 16, 12:46 pm, n...@puntnl.niks (Nico Coesel) wrote:
> rickman <gnu...@gmail.com> wrote:
> >On Apr 15, 4:19 pm, jean-francois hasson <jfhas...@club-internet.fr>
> >wrote:
> >> Hi,
> >>
> >> We are looking at interfacing the Cyclone III EP3C40 with an SDRAM at 90
> >> MHz. We are considering having the FPGA generate the clock to the
> >> interface and find a way to ensure both the sdram and the cyclone III
> >> are in phase regarding the clock. We could not find up to now a
> >> mechanism that would ensure that both the SDRAM and the cyclone III will
> >> have their clock almost with the same phase relationship over
> >> temperature, voltage and process. Could someone provide us with
> >> indications regarding the implementation ?
> >> Today we thought of using a specific PLL to generate a delayed clock
> >
> >Still, if you need to reduce that, you can have the FPGA generate the
> >clock, routing it to the SDRAM with a trace of a known length. Also
> >route a second, identical clock output back to the FPGA with the same
> >length trace. This may require some serpentine coils, but with a
> >short route it shouldn't use too much board space. Then both devices
> >will be receiving an external clock that is totally in phase other
> >than any skew generated internally in the FPGA.
>
> Totally useless. You don't need to synchronise the internal FPGA clock
> with the SDRAM. Just clock the SDRAM from the FPGA and make sure you
> meet setup and hold times on both FPGA and SDRAM. Thats all that
> matters.

Isn't that a little bit like saying that to make money in the stock
market, you just buy low and sell high? The point is *how* you meet setup
and hold.

The OP has said that his timing margins are tight and he has to be
concerned with PVT variations. Sourcing the clock from the FPGA is not a
magic bullet. In fact, I think doing that without routing it back to a
clock pin makes it very difficult to even determine the timing of the I/O
signals relative to the output clock, much less assure that it meets the
requirements of the SDRAM. FPGAs have clear specs on timing relative to a
clock *input*. I have never seen specs on timing of I/Os relative to a
clock *output*.

Rick

Article: 139854
On Apr 16, 4:24 am, "oliver.hofh...@googlemail.com"
<oliver.hofh...@googlemail.com> wrote:
> Hi everybody,
>
> in my design I have a timing problem with an ADC. I have had this
> problem since my design has become more dense.
> This is the ADC I'm using: AD677
> (http://www.analog.com/static/imported-files/data_sheets/AD677.pdf)
>
> My ADC entity has 3 inputs and 3 outputs (see datasheet):
>
> BUSY: IN
> SCLK: IN
> SDATA: IN
>
> CAL: OUT
> CLK: OUT
> SAMPLE: OUT
>
> How do I now define the relationship between the CLK output and the
> SCLK input, for example? The timing specifications are in the
> datasheet.
>
> Abstract of my *.ucf file:
>
> NET ADC1_BUSY       LOC = C25;
> NET ADC1_SCLK       LOC = E24;
> NET ADC1_SDATA      LOC = B24;
>
> NET ADC1_CAL        LOC = E14;
> NET ADC1_CLK        LOC = D11;
> NET "ADC1_SAMPLE" LOC = F14;
>
> I hope somebody with some experience in constraining a design can give
> me some hints.
> The only timing constraint I'm using right now is the period
> constraint for the 100 MHz clock I'm using.
> I'm an absolute beginner with timing constraints... Thanks in advance
> for your help.
>
> sincerely yours
>
> Olli
>
> Here is the whole UCF file:
>
> CONFIG STEPPING = "2";
>
> NET clk             LOC = B13;
>
> NET ADC_reset       LOC = G9;
> NET ADC1_BUSY       LOC = C25;
> NET ADC1_CAL        LOC = E14;
> NET ADC1_CLK        LOC = D11;
> NET ADC1_SCLK       LOC = E24;
> NET ADC1_SDATA      LOC = B24;
> NET ADC2_BUSY       LOC = F24;
> NET ADC2_CAL        LOC = D13;
> NET ADC2_CLK        LOC = D14;
> NET ADC2_SAMPLE     LOC = C11;
> NET ADC2_SCLK       LOC = C26;
> NET ADC2_SDATA      LOC = E23;
>
> NET DEACTIVATE_N    LOC = N25;
> NET MESS_DONE       LOC = L26;
> NET MESS_ENABLE     LOC = E2;
> NET M_RESET         LOC = E1;
>
> NET DIP_STRING      LOC = A4;
> NET DIP_TD_READMODE LOC = B6;
> NET DOUT            LOC = C24;
>
> NET Druck_VCC       LOC = F11;
> NET DIN             LOC = A23;
> NET MCLK            LOC = F16;
> NET SCLK            LOC = B23;
>
> NET MITTLUNG_LED    LOC = V25;
> NET MITTLUNG0       LOC = B4;
> NET MITTLUNG1       LOC = C6;
>
> NET modell_MESSUNG  LOC = G10;
>
> NET SER_IN_0        LOC = U4;
> NET SER_IN_1        LOC = AA11;
> NET SER_OUT_0       LOC = V4;
> NET SER_OUT_1       LOC = AC11;
> NET "SER_OUT_2" LOC = AC15;
> NET "SER_IN_2" LOC = AC14;
> NET SHIFT_5_TO_3<7> LOC = AC12;
> NET SHIFT_5_TO_3<6> LOC = AA13;
> NET SHIFT_5_TO_3<5> LOC = AD13;
> NET SHIFT_5_TO_3<4> LOC = AB13;
> NET SHIFT_5_TO_3<3> LOC = AC13;
> NET SHIFT_5_TO_3<2> LOC = AA14;
> NET SHIFT_5_TO_3<1> LOC = AD14;
> NET SHIFT_5_TO_3<0> LOC = AB14;
> NET TASTER1         LOC = D2;
> NET TASTER2         LOC = D1;
> NET TD_out          LOC = F12;
> NET TD_out2         LOC = F13;
> NET TD_VCC          LOC = D16;
> NET TD_VCC2         LOC = D15;
> NET URX_LED         LOC = K26;
> NET URX_TX_go_to    LOC = C10;
> NET "clk" TNM_NET = clk;
> TIMESPEC TS_clk = PERIOD "clk" 10 ns;
> NET "mittlung_LED" LOC = V25;
> NET "ADC1_SAMPLE" LOC = F14;
> NET "EIN_kHZ" LOC = AA15;
>
> NET "EIN_kHz" LOC = AA15;
> NET "pin_x" LOC = AD25;
> NET "pin_y" LOC = AC19;
> NET "pin_z" LOC = AD19;
> NET "sync" LOC = AC16;
> NET "sendclk" LOC = AA16;

There is no relationship.
I think clk and sclk are essentially asynchronous to each other as seen
from the FPGA, since you can't be sure of the delay on the board for each
signal.

Article: 139855
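[Archive editor's note: a hedged sketch of what UCF I/O constraints for an
interface like this could look like. OFFSET constraints tie pad timing to
the internal clk already constrained by the PERIOD spec. The nanosecond
values below are placeholders to illustrate the syntax, not numbers from
the AD677 datasheet, and given the slow CLK-to-SCLK relationship a looser
or absent constraint may in fact be more appropriate.]

```
# Placeholder values; this only illustrates the OFFSET syntax.
NET "clk" TNM_NET = "clk";
TIMESPEC "TS_clk" = PERIOD "clk" 10 ns HIGH 50%;

# Inputs from the ADC must be stable this long before the capturing edge:
NET "ADC1_SDATA"  OFFSET = IN 5 ns BEFORE "clk";
NET "ADC1_BUSY"   OFFSET = IN 5 ns BEFORE "clk";

# Outputs to the ADC must be valid this soon after the clock edge:
NET "ADC1_SAMPLE" OFFSET = OUT 8 ns AFTER "clk";
NET "ADC1_CAL"    OFFSET = OUT 8 ns AFTER "clk";
```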
On 15 Apr., 13:41, Ben_Quem <B_quemen...@hotmail.com> wrote:
> Someone else told me a lower value for the worst case slack is
> acceptable; he told me a 0.00000x ns slack will work, because the
> vendor already includes margin in the calculation for the worst slack.
> Could someone confirm this? Or could someone explain which rule they
> are using to validate timing closure?

By definition a slack of zero is acceptable. The only reason to target a
higher slack would be to account for unknown timing error sources. In
most systems that is not necessary because the clock parameters are well
characterized.

The timing report of ISE tells you what effects are included in the
calculations. These contain at least on-chip jitter sources and clock
skew, and the worst-case supply voltage and temperature values that the
chip will run at. If I recall correctly, the default setting also assumes
a certain input clock jitter.

Also, doing the computation at the worst-case settings already gives you
lots of safety margin, as it is very unlikely that all parameters are
worst case simultaneously.

Kolja Sulimma

Article: 139856
On Thu, 16 Apr 2009 10:32:34 -0300, Walter Gallegos
<wsfpga@adinet.com.uy> wrote:

>wooster.berty@gmail.com wrote:
>> On Apr 13, 10:11 am, ivan <ivan.so...@gmail.com> wrote:
>>> Hi,
>>> I've been playing around with ISE 10.1 (and also 9.2i) and Spartan 3-AN.
>>> When I open Project Navigator, synthesize my design and program it
>>> into the FPGA using iMPACT, everything works fine. But if I change
>>> something in the design (like, for instance, turn on the LED that was
>>> previously off), and start iMPACT again (it offers me to choose the
>>> bit file, as it did the first time), for some reason it only downloads
>>> the old design to the board, no matter that I chose the new bit file.
>>> Things work normally when I close Xilinx altogether and start it up
>>> again.
>>>
>>> Is this a bug, or did I skip some setup options?
>>>
>>> Please help, it's REALLY annoying to have to turn off the whole ISE
>>> every time I want to try something new!
>>>
>>> Thanks,
>>> Ivan.
>>
>> Every so often we have the same issue. (The bit file is fine, and so is
>> its time stamp; we believe the issue has to do with caching, though we
>> are not sure whether it is iMPACT or Windows.)
>> Just close iMPACT and reopen it. No need to close ISE, just iMPACT.
>> BW.
>
>I see a similar issue sometimes, and with specific Windows installations,
>but I'm not sure if it is an ISE bug or a file handle system bug.
>
>For some obscure reason - like a file system buffer actualization bug -
>when you run the implementation chain tools, ISE uses an older copy of
>your sources; then your bitstream has the correct time stamp but was
>generated from older sources.
>
>I saw also that this issue was more frequent in long debug sessions
>without shutting down the machine.
>
>One way to check this is: Close ISE -> Restart Windows -> Open ISE,
>rerun all implementation and compare results.
>
>Additional info: I never catch this issue with our actual work points
>
>ISE 10.1.03 Windows XP SP3 3 GB RAM
>ISE 10.1.03 Ubuntu 8.04 1 GB RAM
>ISE 10.1.03 Ubuntu 8.10 1 GB RAM
>
>but it was a "known issue" to us with
>
>ISE 8.2 Windows XP SP2 1 GB RAM
>
>Walter

I wonder if a simple fix would be to delete the bit file before building.

Article: 139857
Hi, I am having some issues with the Xilinx SDK. For some reason, whenever
I move my mouse over some text in my code file, the following appears in
my console:

No symbol "XST_DEVICE_IS_STARTED" in current context.

1) I would like to know how to turn that off since it does not seem like
the env
2) Also, if I right-click on a function and choose "Open declaration" or
"Open definition", the program cannot locate the appropriate function.

Any clue how to make these things work?

Amish

Article: 139858
On Apr 16, 4:24 am, "oliver.hofh...@googlemail.com"
<oliver.hofh...@googlemail.com> wrote:
> Hi everybody,
>
> in my design i have a timing problem with an ADC. I have this problem
> since my design has become more dense:
> [...]
> How do i now define the relationship between the CLK-output and the
> SCLK-input for example? In the datasheet are the timing-
> specifications.

CLK to SCLK varies from 100 to 300 ns. Your clock has a period of 10 ns,
so I am not sure what you want to accomplish with timing constraints. How
do you communicate with the ADC?

-Mike

Article: 139859
Hi there,

we have an older FPGA project with a Synopsys dc_shell / Xilinx ISE
design flow. There are dc_shell scripts that used to work well, but now
the command syntax has obviously changed to Tcl in version B-2008.09.
That is not a problem. But when I try to do

write -f edif

I get the error message:

Error: format EDIF not supported in XG mode. Please contact your Synopsys
support if you need assistance. (XG-104)

How can I create an EDIF file with the new Design Compiler? Or is there a
way to use the .ddc files with Xilinx?

Thank you very much for your advice,
Markus

Article: 139860
On 17 Apr., 00:20, mng <michael.jh...@gmail.com> wrote:
> On Apr 16, 4:24 am, "oliver.hofh...@googlemail.com"
> <oliver.hofh...@googlemail.com> wrote:
> > Hi everybody,
> >
> > in my design i have a timing problem with an ADC. I have this problem
> > since my design has become more dense:
> > [...]
> > How do i now define the relationship between the CLK-output and the
> > SCLK-input for example? In the datasheet are the timing-
> > specifications.
>
> CLK to SCLK varies from 100 to 300 ns. Your clock has a period of
> 10ns, so I am not sure what you want to accomplish with timing
> constraints. How do you communicate with the ADC?
>
> -Mike

Hi,

the ADC is directly connected to the FPGA pins. The communication is
realized with a state machine that runs in the FPGA.

Article: 139861
On Apr 16, 11:13 am, gert1999 <ggd...@gmail.com> wrote:
> Hi all
>
> I have trouble with a device from Xilinx, the CoolRunner-II CPLD
> Starter Kit.
> The FPGA is XC2C256-7TQG144C. It was delivered with a resource CD
> (containing the required drivers) and the ISE 10.1 design suite.
>
> Take a look at http://www.xilinx.com/products/devkits/SK-CRII-L-G.htm
> for a complete description.
>
> The device connects to the PC with a simple USB cable; nothing else is
> required.
> I followed the instructions for installation as provided in the manual.
> The USB drivers have been correctly installed.
>
> What happens:
>
> From ISE or even from iMPACT as a stand-alone application I try to
> communicate with the board. All settings are as displayed in the
> manual.
>
> iMPACT says:
>
> Source driver files not found.
> The Platform Cable USB is not detected. Please connect a cable. If a
> cable is connected, please disconnect and reconnect to the usb port,
> follow the instructions in the 'Found New Hardware Wizard', then retry
> the Cable Setup operation.
> Cable connection failed ...
>
> Reconnecting does not result in a 'Found New Hardware' message, as
> everything is properly installed (usb drivers before plugging in the
> device).
> I tried on another computer with the same result.
>
> Can anyone solve this problem?
>
> Thanks a lot
>
> Gert

According to page 4 of the manual
(http://www.xilinx.com/support/documentation/boards_and_kits/ug501.pdf),
you can't use iMPACT and the USB cable, you have to use the Digilent
program:
"Configuration files can be transferred to the CoolRunner-II Evaluation
Board using a USB cable and the CoolRunner-II Utility Window software, or
using an external programming cable (not provided) and Xilinx's iMPACT
software. If using the Xilinx programming cable and iMPACT, attach the
JTAG leads to the pins on J8."

-Dave Pollum

Article: 139862
Hello, I am having the following code for a FIFO. When I try to
synthesize the Verilog code in ISE targeting BRAM, it throws the
following warning:

INFO:Xst:1788 - Unable to map block <fifo> on BRAM. Output FF <full_r>
does not have same control signals as <empty_r>.

Can anyone help me to resolve this problem? Thanks!

Code:

module fifo(write_enb, read_enb, data_in, data_out, empty, full, clk);
   input clk;
   input write_enb, read_enb;
   input [(`WIDTH-1):0] data_in;
   output [(`WIDTH-1):0] data_out;
   output empty, full;

   // Output registers
   reg empty_r, full_r;
   reg [(`WIDTH-1):0] data_out_r;

   // Internal registers
   integer write_ptr, read_ptr;
   reg [(`WIDTH-1):0] ram [(`DEPTH-1):0];
   reg do_write, do_read;

   always @ (posedge clk)
     begin
        do_read  = read_enb == 1'b1 && empty == 1'b0;
        do_write = write_enb == 1'b1 && full == 1'b0;

        if (do_read)
          begin
             data_out_r <= ram[read_ptr];
             read_ptr <= (read_ptr + 1) % `DEPTH;
             full_r <= 1'b0;
             if (!do_write && (read_ptr + 1) % `DEPTH == write_ptr)
               empty_r <= 1'b1;
          end // if

        if (do_write)
          begin
             ram[write_ptr] = data_in;
             write_ptr <= (write_ptr + 1) % `DEPTH;
             empty_r <= 1'b0;
             if (!do_read && read_ptr == (write_ptr + 1) % `DEPTH)
               full_r <= 1'b1;
          end // if
     end // always

   assign empty = empty_r;
   assign full = full_r;
   assign data_out = data_out_r;

   // This should be sufficient for no Xs to leak out from ram.
   initial
     begin
        write_ptr = 1'b0;
        read_ptr = 1'b0;
        data_out_r = 8'b0;
        empty_r = 1'b1;
        full_r = 1'b0;
     end
endmodule

Article: 139863
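[Archive editor's note: not the poster's code, but a hedged sketch of the
structure XST's BRAM inference generally expects — the memory access kept
in its own simple clocked process (synchronous write, synchronous read
into the output register), with the empty/full flags computed from
pointers in separate logic, so that no RAM output flip-flop shares
control signals with the flag registers. The module name, the
power-of-two depth, and the extra-pointer-bit full/empty scheme are this
sketch's own choices, not from the original post.]

```verilog
// Sketch of a BRAM-friendly FIFO: RAM process isolated from flag logic.
module fifo_bram #(parameter WIDTH = 8, DEPTH_LOG2 = 9)
  (input  wire             clk,
   input  wire             wr_en,
   input  wire             rd_en,
   input  wire [WIDTH-1:0] din,
   output reg  [WIDTH-1:0] dout,
   output wire             empty,
   output wire             full);

   reg [WIDTH-1:0]    ram [(1<<DEPTH_LOG2)-1:0];
   // One extra pointer bit distinguishes the full case from the empty case.
   reg [DEPTH_LOG2:0] wr_ptr = 0, rd_ptr = 0;

   assign empty = (wr_ptr == rd_ptr);
   assign full  = (wr_ptr == {~rd_ptr[DEPTH_LOG2], rd_ptr[DEPTH_LOG2-1:0]});

   // RAM process: plain synchronous write and synchronous read only.
   always @(posedge clk) begin
      if (wr_en && !full)
        ram[wr_ptr[DEPTH_LOG2-1:0]] <= din;
      if (rd_en && !empty)
        dout <= ram[rd_ptr[DEPTH_LOG2-1:0]];
   end

   // Pointer process: never touches the RAM array itself.
   always @(posedge clk) begin
      if (wr_en && !full)  wr_ptr <= wr_ptr + 1;
      if (rd_en && !empty) rd_ptr <= rd_ptr + 1;
   end
endmodule
```

Compared with the original, the read path has one cycle of latency, and
the `%` modulo arithmetic and `integer` pointers are replaced by
power-of-two pointer wraparound, which may also help the tool map the
address logic cleanly.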
On 17 apr, 12:58, Dave Pollum <vze24...@verizon.net> wrote:
> On Apr 16, 11:13 am, gert1999 <ggd...@gmail.com> wrote:
> > Hi all
> >
> > I have trouble with a device from Xilinx, the Coolrunner II CPLD
> > Starter Kit.
> > The FPGA is XC2C256-7TQG144C. It was delivered with a resource CD
> > (containing the required drivers) and the ISE10.1 design suite
> >
> > Take a look at http://www.xilinx.com/products/devkits/SK-CRII-L-G.htm
> > for a complete description
> >
> > The device connects to the PC with a simple USB-cable, nothing else
> > is required
> > I followed instructions for installation as provided in the manual.
> > USB-Drivers have been correctly installed.
> >
> > What happens:
> >
> > From iSE or even from Impact as stand alone application I try to
> > communicate with the board. All settings are as displayed in the
> > manual
> >
> > Impact says:
> >
> > Source driver files not found.
> > The Platform Cable USB is not detected. Please connect a cable. If a
> > cable is connected, please disconnect and reconnect to the usb port,
> > follow the instructions in the 'Found New Hardware Wizard', then
> > retry the Cable Setup operation.
> > Cable connection failed ...
> >
> > Reconnecting is not resulting in a message 'Found New Hardware' as
> > everything is properly installed (usb drivers before plugging in the
> > device)
> > I tried on another computer with the same result.
> >
> > Can anyone solve this problem ?
> >
> > Thanks a lot
> >
> > Gert
>
> According to page 4 of the manual
> (http://www.xilinx.com/support/documentation/boards_and_kits/ug501.pdf),
> you can't use iMPACT and the USB cable, you have to use the Digilent
> program:
> "Configuration files can be transferred to the CoolRunner-II Evaluation
> Board using a USB cable and the CoolRunner-II Utility Window software,
> or using an external programming cable (not provided) and Xilinx's
> iMPACT software. If using the Xilinx programming cable and iMPACT,
> attach the JTAG leads to the pins on J8."
>
> -Dave Pollum

Hello Dave,

Thanks a lot for the help. I am new to FPGAs. I was following the
tutorial example in another manual and, following those instructions, I
supposed something to be wrong. It's clearly written on page 4 of the
other reference manual, as you indicate. Hope it will work out fine ...

Gert

Article: 139864
On Mar 30, 4:47 am, Tommy Thorn <tommy.th...@gmail.com> wrote:
> [Self-follow up is bad form, I know ...]
>
> A 10 min experiment to change YARI to an MCU configuration (Harvard
> style split instruction and data memory, and delete multiply and
> divide support) slimmed it down to 2,700 LE on an Altera EP1C12. I
> suspect it would be easy to get it down < 2,400 LE.
>
> Note, size was never a priority in the original design.
>
> Tommy

Hi,

your Verilog is Icarus Verilog and Altera Verilog, but not fully portable
Verilog. I fixed 4 different issues that prevented your code from passing
synthesis with XST, but there are still some problems to solve :(

Antti

Article: 139865
here's the pinout of a board I'm using:

# chipset:
# xc3s500e-4cp132
#
# 3s500E
# xxxxx-xxxx
# korea
# C6-DGQ 4C
#
# apparently the c6 defines the package as CPG132 as per
# xilinx document ds312.pdf
#
#  1 --|DGND              5V IN |-- 40
#  2 --|DGND              DGND  |-- 39
#  3 --|PIN3  dual        PIN38 |-- 38 dual/gclk
#  4 --|PIN4  dual        PIN37 |-- 37 dual
#  5 --|PIN5  rhclk/dual  PIN36 |-- 36 I/O
#  6 --|PIN6  rhclk/dual  PIN35 |-- 35 dual/gclk
#  7 --|PIN7  rhclk/dual  PIN34 |-- 34 lhclk
#  8 --|PIN8  rhclk/dual  PIN33 |-- 33 lhclk
#  9 --|PIN9  rhclk/dual  PIN32 |-- 32 I/O
# 10 --|PIN10 I/O         PIN31 |-- 31 lhclk
# 11 --|PIN11 dual        PIN30 |-- 30 I/O
# 12 --|PIN12 dual        PIN29 |-- 29 lhclk
# 13 --|PIN13 lhclk       PIN28 |-- 28 lhclk
# 14 --|PIN14 rhclk/dual  PIN27 |-- 27 I/O
# 15 --|PIN15 dual        PIN26 |-- 26 I/O
# 16 --|PIN16 gclk        PIN25 |-- 25 I/O
# 17 --|PIN17 gclk        PIN24 |-- 24 vref
# 18 --|PIN18 lhclk       PIN23 |-- 23 I/O
# 19 --|DGND              DGND  |-- 22
# 20 --|DGND              DGND  |-- 21

I checked the type associated with the pins against the Xilinx
documentation, so I mapped them out using Xilinx ds312.pdf and came up
with this type configuration for the CP132 ball.

It seems that what I put out on the output pins is relevant. Pin 15 has a
1/4 second blink code, while pins 17 and 18 have a 2 MHz signal going
out, and all is OK. If I swap pins 15 and 18, I lock up. If I drive pin
13 with the 2 MHz signal, no problem either. Simply put, if I drive
PIN13 or PIN18 (the lhclk types) at 2 MHz there is no problem, but if I
drive them at 4 Hz, they lock up the FPGA.

Any insight or suggested reading/places to look in the multitude of
summary reports that ISE generates? I find it strange that nothing odd
appears in the build process or in the testbench, but when I download the
program, the FPGA locks up and only a power cycle resets it.

Article: 139866
rickman <gnuarm@gmail.com> wrote:
>On Apr 16, 12:46 pm, n...@puntnl.niks (Nico Coesel) wrote:
>> rickman <gnu...@gmail.com> wrote:
>> >On Apr 15, 4:19 pm, jean-francois hasson <jfhas...@club-internet.fr>
>> >wrote:
>> >> Hi,
>> >>
>> >> We are looking at interfacing the Cyclone III EP3C40 with an SDRAM at 90
>> >> MHz. We are considering having the FPGA generate the clock to the
>> >> interface and find a way to ensure both the sdram and the cyclone III
>> >> are in phase regarding the clock. We could not find up to now a
>> >> mechanism that would ensure that both the SDRAM and the cyclone III will
>> >> have their clock almost with the same phase relationship over
>> >> temperature, voltage and process. Could someone provide us with
>> >> indications regarding the implementation ?
>> >> Today we thought of using a specific PLL to generate a delayed clock
>> >
>> >Still, if you need to reduce that, you can have the FPGA generate the
>> >clock, routing it to the SDRAM with a trace of a known length. Also
>> >route a second, identical clock output back to the FPGA with the same
>> >length trace. This may require some serpentine coils, but with a
>> >short route it shouldn't use too much board space. Then both devices
>> >will be receiving an external clock that is totally in phase other
>> >than any skew generated internally in the FPGA.
>>
>> Totally useless. You don't need to synchronise the internal FPGA clock
>> with the SDRAM. Just clock the SDRAM from the FPGA and make sure you
>> meet setup and hold times on both FPGA and SDRAM. Thats all that
>> matters.
>
>Isn't that a little bit like saying to make money in the stock market,
>you just buy low and sell high? The point is *how* you meet setup and
>hold.
>
>The OP has said that his timing margins are tight and he has to be
>concerned with PVT variations. Sourcing the clock from the FPGA is
>not a magic bullet. In fact, I think doing that without routing it
>back to a clock pin makes it very difficult to even determine the
>timing of the I/O signals to the output clock, much less assure that
>it meets requirements of the SDRAM. FPGAs have clear specs on timing
>relative to a clock *input*. I have never seen specs on timing of
>I/Os relative to a clock *output*.

It's simple: if you source the clock from the FPGA, then the I/O will
have zero delay with respect to the clock edges (give or take the skew
between output buffers). When capturing data you can determine a window
in which the data from the SDRAM is stable (clock-to-output delay of the
SDRAM plus routing delay, combined with the setup and hold times). You'll
probably need 2 clocks with a phase shift (one for sending data to the
SDRAM and one for capturing data from the SDRAM). A 90 degree shift will
probably do. If you are unlucky you need a phase-shifted clock for the
SDRAM as well. All these clocks can be produced inside the FPGA.

It goes without saying that the FPGA output and input registers must be
inside the I/O buffer cell. The external SDRAM signals should not be
routed inside the FPGA, otherwise you'll be in a world of pain.

--
Failure does not prove something is impossible, failure simply indicates
you are not using the right tools...
"If it doesn't fit, use a bigger hammer!"
--------------------------------------------------------------

Article: 139867
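[Archive editor's note: one common way to implement the clock sourcing
discussed here (my addition, not from the thread) is to forward the SDRAM
clock through a DDR output register fed from a PLL tap, so the pad
toggles with a well-defined clock-to-out instead of routing a clock net
to a plain output. A sketch using Altera's ALTDDIO_OUT megafunction; the
port and parameter names should be double-checked against the Quartus
documentation for your version, and the phase choice is illustrative.]

```verilog
// Sketch: mirror a PLL clock tap onto the SDRAM clock pin.
// clk_fwd could be, e.g., the PLL's -90 degree tap so that SDRAM clock
// edges land inside the FPGA's data-valid window (assumption, not a
// verified board-level number).
module sdram_clock_forward (
    input  wire clk_fwd,       // PLL output chosen for board timing
    output wire sdram_clk_pad  // routed directly to the SDRAM CLK ball
);
    // Driving datain_h = 1 and datain_l = 0 makes the pad a copy of
    // outclock, launched from the I/O element's own DDR register.
    altddio_out #(.width(1)) u_clk_fwd (
        .outclock (clk_fwd),
        .datain_h (1'b1),      // value driven on the rising edge
        .datain_l (1'b0),      // value driven on the falling edge
        .dataout  (sdram_clk_pad)
    );
endmodule
```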
http://news.prnewswire.com/ViewContent.aspx?ACCT=109&STORY=/www/story/03-31-2009/0004998410&EDATE=

Already 2 weeks? But how come there can exist 1000 designs, when the
first shipments were made 31 March? Also the number of 700 EA customers
seems unlikely; if so, then logistics has made miracles, shipping to 700
customers in no time, and those customers already used the parts in 1000
designs?

Oh well, it is only the news the Xilinx way; it still remains to be seen
when ISE support for S6/V6 comes.

Antti

Article: 139868
On Apr 17, 7:22 pm, Antti <Antti.Luk...@googlemail.com> wrote:
> http://news.prnewswire.com/ViewContent.aspx?ACCT=109&STORY=/www/story...
>
> already 2 weeks?
>
> but how come there can exist 1000 designs, when first shipments made
> 31 march?
> also the number of EA customers 700 seems unlikely, if so then
> logistic has made miracles shipping to 700 customers in NO time and
> those already used the parts in 1000 designs?
>
> oh, well it only the news the xilinx way, it is still be seen whe ISE
> support for S6-V6 comes
>
> Antti

Eh, that was a copy of the Xilinx news, just with the PCB photo added.
But funny, the Xilinx PR says, quote:

"As scheduled, engineering samples of the Virtex-6 LX240T device are
available now with general availability of remaining devices of both
families on track for delivery in the second half of this year."

Any way I read the above statement, it says that the LX240T is GENERALLY
available NOW as ES silicon, and other devices will follow. To my best
knowledge this is not true; there is NO availability of any S6 or V6 to a
general audience. Maybe I need to improve my reading skills.

Antti

Article: 139869
I added references to pin13 to the UCF and top.vhd, then I deleted them.

Now my Implement Design step fails on a reference to "pin13". I've tried
deleting all references to "pin13" and did "Re-Run All", yet I still see
the signal pin13 in the pinout report. It wasn't until I finally made a
brand new project that I got rid of the reference to pin13. This leads me
to believe that "Re-Run All" isn't really re-running all. Anybody else
have issues with this?

Article: 139870
On Apr 17, 9:22 am, Antti <Antti.Luk...@googlemail.com> wrote:
> http://news.prnewswire.com/ViewContent.aspx?ACCT=109&STORY=/www/story...

Thanks for the link. That Xilinx Virtex(R)-6 LX240T Evaluation Board
looks interesting. Visible features include x4 PCIe, SODIMM, DVI, USB
(host), Ethernet PHY, LCD, ...

Tommy

Article: 139871
On Apr 17, 12:40 pm, jleslie48 <j...@jonathanleslie.com> wrote:
> I added to the UCF and top.vhd references to pin13, then I deleted
> them.
>
> Now my Implement design fails on a reference to "pin13". I've tried
> deleting all references to "pin13" and did re-run all,
> yet I still see in the pinout report the signal pin13. It wasn't
> until I finally made a brand new project that I finally got
> rid of the reference to pin13. This leads me to believe that the
> "re-run all" isn't really re-running all. Anybody else have issues
> with this?

OK, Project -> Cleanup Project Files seems to clear this issue.

Article: 139872
Hallo! OK, I will not buy the XSA-50 used from my professor. I am a
complete newbie in this world, so I would like some advice on which board
I can buy. What is the difference between a board and a programming kit?
Which is the best choice for a newbie who wants to start? Thank you very
much!

Best regards,
Giuseppe Rossini

Article: 139873
On Apr 17, 7:54 pm, "Ged" <ciro.ross...@alice.it> wrote:
> Hallo! Ok, I will not buy XSA-50 used from my professor. I am a very
> newbie in this world, so I would like some advice on wich board I can
> buy. Wich is the difference between a board and a programming kit?
> Wich the best choice for a newbie that want to start? Thank you very
> much!
>
> Best regards,
>             Giuseppe Rossini

It depends. You can get a kit (programmer + board) for about 59 USD, but
you need to go shopping; make up a budget first and decide. At 59 USD the
options are not many; at a little higher price there will be more
choices. You can also ask for a sponsored board, even I may have some
left over :)

Antti

Article: 139874
On Apr 11, 4:29 pm, Mike Treseler <mtrese...@gmail.com> wrote:
> jleslie48 wrote:
> >>> I developed a message stream using a 32Mhz clock fpga ...
> >>> I switched to a 40Mhz clock fpga,
> > I still have no idea why making the loop iterate 10 times vs 9 would
> > result in such catastrophic failure.
>
> Maybe the failure is due to increasing the clock frequency.
> What does static timing say about Fmax?
>
> -- Mike Treseler

OK, here's the current status. The CRC error seems to be some kind of
switch in the iMPACT download facility; when I load directly from the
Boundary Scan in ISE 10.1, I don't get the CRC error. However, I still
lock up the FPGA. I dumped the offending code, rewrote it completely, and
the problem went away... or so I thought. I figured the way I was
handling the timing of the signal was the issue, and resigned myself to
the idea that my redo of the transmit routine avoided whatever issue I
was having.

So now I move on, and I take my output signal (a 2 MHz digital signal)
and decide to repeat its output on a new pin; so I add a new pin to the
.ucf file, add the label to the port, and attach my 2 MHz signal to the
new pin, and guess what? I lock up my FPGA again. Something very weird is
going on. I started a new thread where there are more details on this
issue; look for the title: "fpga locks up with slow signal, spartan chip,
pin type issues."