Is there anybody who has experimented with liquid nitrogen cooling of overclocked FPGAs?

Bouh
Article: 52126

Austin Franklin wrote:
> Don't take this the wrong way, but most programmers I know who try to design
> hardware don't really do it very well, because of lack of understanding and
> ability in the hardware area. Because you may be a very good C programmer
> doesn't mean you have the mindset and skill necessary to design hardware
> with an HDL, simply because it is a "programming" language. It's just a
> tool that merely allows for the implementation of concepts that aren't
> generally used in common C programming.

I find that I can write a Verilog compiler, and argue with the best as to what a Verilog program is supposed to do, but I have great difficulty actually writing a Verilog program; and it's not for lack of knowledge of Verilog, or knowledge of hardware, since I wrote a Verilog simulator and synthesizer that seems to have legs. I know something of Verilog and a bit about hardware. But of course when faced with a requirement and a blank FPGA, none of that means a darned thing.

Like most software engineers trying to cross over, I lack experience, and that is often reflected in coding style or the ability to cut to the chase. Just because Verilog is a programming language, software engineers should not expect to be able to produce even the most basic working design without spending some time making lots of stupid mistakes. It's a lesson in humility, and the curriculum spans some years. This is surely true of any HDL.

As for the (long forgotten) point of this thread, I will surprise some by suggesting that even C programmers might find schematics the most instructive introduction to hardware design. One quickly tires of the frustrating incompatibility of schematic editors, but the schematic drawing, even on paper, makes for a good mental image of hardware, which is fundamentally *not* imperative/sequential/software.

--
Steve Williams              "The woods are lovely, dark and deep.
steve at icarus.com          But I have promises to keep,
steve at picturel.com        and lines to code before I sleep,
http://www.picturel.com      And lines to code before I sleep."
"David" <gretzteam@hotmail.com> schrieb im Newsbeitrag news:it__9.55443$Cc6.312451@wagner.videotron.net... > Hi, > I have a digital signal representing a sinewave of amplitude 0.5. The signal > has 32 bits with 16 bits before the decimal point and 16 bits after. > Obviously, modelsim doesn't care about the decimal point so when I choose > 'Analog' for the signal's format in the waveform, the sine wave doesn't fit > on my monitor since modelsim thinks it has a very large amplitude...but all > the 1's are in fact after the decimal point...Is there a way to go around > this problem? Sure. Set the mutiplication factor to a low value. -- MfG FalkArticle: 52129
Article: 52129

Paulo Valentim <prv3299@yahoo.com> wrote:
> The problem is that I want to know the relationship between the 8
> differential clocks (from the 16 pads) and their availability. Are
> they by quadrants or are they global? If they are by quadrants, which
> quadrant are they available in? It looks like the differential clocks
> from BUFGDS are global but it's not really clear.
>
> Another question is that I don't know how to pair them off.
>
> For example, which is the differential pair of GCLK1P? Is it GCLK1S or
> GCLK0S (based on the IO differential pair)?

Each quadrant is allowed 8 clocks. A particular quadrant can have the clock from BUFGMUX0P or BUFGMUX0S, but not both. As well, it can have 1P or 1S, but not both, 2P/2S, 3P/3S, etc.

If you use all the global clock pins (8 at the top, 8 at the bottom) you will have 4 clocks on the P buffers and 4 on the S buffers, and you will have access to every clock in every quadrant.

The fact that your clocks are differential is not important. It only means that they take up twice as many pins; it has no effect on the BUFG or DCM usage.

(Note to Xilinx: 16 differential clocks would be nice - differential clocks chew up the global clock pins too quickly. On my current project, we had to put one clock on general I/O (again differential) because we used up all the real clock pins with 8 other differential clocks.)

Hamish
--
Hamish Moffatt VK3SB <hamish@debian.org> <hamish@cloud.net.au>
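For reference, bringing one of those differential pairs onto a global clock net in VHDL looks roughly like the sketch below; the pin and signal names are made up, and the IBUFGDS/BUFG primitives come from the unisim library:

library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity diff_clk_in is
  port (
    gclk_p : in  std_logic;   -- P side of the differential pair
    gclk_n : in  std_logic;   -- N side of the differential pair
    clk_g  : out std_logic);  -- clock on a global net
end diff_clk_in;

architecture rtl of diff_clk_in is
  signal clk_ibufg : std_logic;
begin
  -- differential global clock input buffer
  u_ibufgds : IBUFGDS
    port map (I => gclk_p, IB => gclk_n, O => clk_ibufg);

  -- global clock buffer driving the clock network
  u_bufg : BUFG
    port map (I => clk_ibufg, O => clk_g);
end rtl;

As Hamish notes, the pair uses two pads but only one global buffer, so the BUFG/DCM budget is unchanged.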
Article: 52130

Hey,

Is there a way to program the Altera EPC1213 with a ByteBlaster cable, or do I need some external programmer to do that?

- Jarmo
jarmoma@mail.student.oulu.fi
Article: 52131

Bouh wrote:
> Is there anybody who has experimented with liquid nitrogen cooling
> of overclocked FPGAs?

The current internal clocks are in the range of 400MHz and the LVDS channels do 1200Mbits per second, with even faster families arriving almost every quarter. Yes, we also await the SiGe FPGAs, an order of magnitude faster.

What is your targeted speed then?

Rene
--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
Article: 52132

Jarmo wrote:
> Is there a way to program the Altera EPC1213 with a ByteBlaster cable, or do I need
> some external programmer to do that?

External programmer.

--
Regards,
Marcin E. Hamerla

"Burn, burn, burn, parliament; fire will consume you at history's bend."
Article: 52133

On Sun, 02 Feb 2003 15:00:39 +0100, Rene Tschaggelar <tschaggelar@dplanet.ch> wrote:

>Bouh wrote:
>> Is there anybody who has experimented with liquid nitrogen cooling
>> of overclocked FPGAs?
>
>The current internal clocks are in the range of 400MHz and the LVDS
>channels do 1200Mbits per second, with even faster families arriving
>almost every quarter.
>Yes, we also await the SiGe FPGAs, an order of magnitude faster.
>What is your targeted speed then?
>
>Rene

Hi Rene,

Yup I know... but SiGe tech is not available to us yet :/

We need to reach the maximum speed for extremely short periods of time.

Bouh
Article: 52134

The commercially available devices do not operate in that range - sorry. You're on your own if you try this, but I doubt it would work, given the type of fab process used. SiGe is expensive to produce and power hungry - don't hold your breath waiting for this either.

"Bouh" <NoMail@Gggugguugugggg2.org> wrote in message news:26vo3v4e8trvln1pb4ot9vpb9vsseoj9hn@4ax.com...
> Is there anybody who has experimented with liquid nitrogen cooling
> of overclocked FPGAs?
>
> Bouh
Article: 52135

When I have to use manual encoding, I usually include a process with a case statement that uses an enumerated type to display a sensible name when a given numeric value is present in the real state coding.

type display_states is (one, two, three);
signal display_state : display_states;

...

process (one_hot_code)
begin
  case one_hot_code is
    when "001"  => display_state <= one;
    when "010"  => display_state <= two;
    when "100"  => display_state <= three;
    when others => null;  -- required to make the std_logic_vector case complete
  end case;
end process;

Every synthesizer I have done this with figures out the logic is redundant, and builds no gates.

Clyde

Alain wrote:
> Thanks for all the answers.
> Manual encoding is a solution, but an enumerated type is very useful in
> simulation, where you see clearly your state and not a numeric value
> without sense (we use a high level description language!). Moreover,
> when you have an existing design (and even several) with enumerated
> types in many and many FSMs, it is very costly to modify "correct" code
> because of a lack in the synthesizer.
> I discovered recently that Mentor Graphics has this safe mode too (see
> http://www.iec.org/online/tutorials/safe_state/)
> I hope that Xilinx engineers will offer this important feature soon
> (and hope they hear me!).
>
> chopra_vikram@excite.com (Vikram) wrote in message news:<b6a2818d.0301281156.5eda1be2@posting.google.com>...
> > Mike Treseler <mike.treseler@flukenetworks.com> wrote in message news:<3E357B74.2090208@flukenetworks.com>...
> > > Alain wrote:
> > > > How specify with XST synthetiser (Xilinx) to encode FSM in "safest
> > > > way", . . .
> > > > I previously use FpgaExpress and it had this option...
> > >
> > > If XST has no "safe" setting for one-hot, consider using
> > > binary encoding as it is inherently safe.
> > >
> > > -- Mike Treseler
> >
> > A google search on - XST one-hot comes up with the foll. links -
> >
> > http://toolbox.xilinx.com/docsan/xilinx4/data/docs/cgd/f6.html
> >
> > http://www.xilinx.com/xlnx/xil_ans_display.jsp?BV_SessionID=@@@@0361207789.1043783564@@@@&BV_EngineID=ccccadchggigjemcflgcefldfgldgji.0&getPagePath=15708
> >
> > Hope this helps,
> > Vikram.
Article: 52136

I'm guessing you won't find "documentation" anywhere that gives you the explicit answers. The implicit answers can be obtained by using the Xilinx fpga_editor tool to look at the routing boxes. You can click on a single "bubble" at the connection point to the box and see the signals the node drives and the signals that drive the node; sometimes it's not obvious when the connection is bidirectional, but if you get the same color coding between the nodes independent of which end's bubble you click on, it's bidirectional.

It's only with the fpga_editor that you can get a true appreciation for what can be connected to what.

Good luck.

"Jing" <hjing@ece.neu.edu> wrote in message news:c4b9775f.0302010836.18869eb0@posting.google.com...
> Hi, I'm trying to find the detailed architecture of the interconnect
> in the Virtex device, but in the datasheet I only find a general
> description of it, especially about the GRM (General Routing Matrix)
> used in Virtex. I'd like to know the detailed structure
> of the GRM, like which switch can connect to which switch. I was
> unable to find this information in your datasheets.
> I also want to know the detailed interconnect structure of
> Virtex-II, in particular the structure of the switch matrix
> that connects the vertical routing channels and the horizontal
> routing channels.
Article: 52137

Before synthesis, the timing tool has no idea what your logic will look like.

Before implementation, the timing tool knows the logic but doesn't know the routing resources or placement information that give you delays between your pads, registers, latches and RAM elements.

Once implementation is complete, static timing analysis can provide accurate values for your delays.

"Skillwood" <skillwoodNOSPAM@hotmail.com> wrote in message news:b1fj7l$12qoi7$1@ID-159866.news.dfncis.de...
> Hi,
> Can anyone tell me why static timing analysis is done after synthesis and
> implementation?
>
> Regards,
> Skillie
Article: 52138

Hi Goran,

Goran Bilski wrote:
> John Williams wrote:
>> Yes that's what I thought. What happens to interrupts when the BIP flag
>> is set?
>
> When the BIP is set, interrupts are disabled.
> In your system call, you can then take the decision to reenable them again by
> clearing the BIP bit, or you can leave it and have interrupt protection if you
> have more of a kernel functionality.
> The BIP bit also blocks the external break signal EXT_BRK, but it will NOT block
> the nonmaskable external break signal EXT_NM_BRK.

Hmm, that's a bit inconvenient. I was hoping to use the BIP bit as an ersatz "supervisor mode" flag - indicating that the current execution context is inside the kernel. However, interrupts must still be enabled at this time. Oh well, I'll work something out. MicroBlaze being a soft-core processor, in principle I should be able to change that behaviour without difficulty? :)

>> Regarding my original question though - what conditions cause an
>> exception to be generated and control thrown to the exception handler at
>> (I think) 0x10? For instance, does the software library throw some kind
>> of exception on divide by zero? If so, how is this exception triggered?
>
> As stated in the documentation, there are currently no exceptions generated in
> MicroBlaze. That will probably change as more features are added to MicroBlaze.

I can live with that :) Even 'trap /address/' would be useful - the exception code could be placed in a register by the function initiating the exception. In the meantime I'm emulating software exceptions with the BRK / RTBD instructions.

> Have you looked at the kernel that we ship with the EDK?
> It has most of the kernel functionality and is shipped as source code with
> no royalties.

<gripe> Weeks later, I'm still waiting for the EDK upgrade to arrive </gripe>

That aside, I have looked at the doco for xilKernel or whatever it's called. We've made a strategic decision to try the "full" OS approach - in particular, a fairly lightweight version of Linux called uClinux, modded to work on processors without an MMU.

At first I was also skeptical, preferring "by intuition" a lightweight microkernel like xilKernel. However, I was swayed by arguments similar to the following:

- the huge base level of system support from Linux - flash disk drivers, networking support, etc. Once you get NFS up and running you just mount the host dev directory and new binaries are immediately available to execute and test on the MicroBlaze target. No reconfiguring, no flash updating, nothing. Talk about speeding application development time. Native GDB support won't hurt one little bit either.

- the ability to patch across to RTLinux, a lightweight scheduling kernel that runs Linux as a low priority usermode thread. This basically turns the operating system upside down, so real-time threads interrupt the OS to request services, notify about events and so on.

- selectively exclude modules / components that are not required. Not sure of the absolute minimum uClinux footprint but it's well under 1 MB (vanilla 80x86 Linux can boot and run from a single floppy).

- uClinux has been successfully ported and used on other lightweight embedded RISC processors, like the small ARM devices and so on. A NIOS port has also been reported, but I don't know a lot about it.

- MicroBlaze symmetric multiprocessing (SMP) anyone? OS support is already right there in the kernel: distributed interrupt handling, etc.

- a familiar dev. environment.

This is particularly important for us as we are developing a research/prototyping platform. Our experience is that with MicroBlaze as-is, the learning curve is too steep and postgrad students spend the first 12 months of their PhD just trying to wrap their heads around the architecture and infrastructure, before they can even consider doing anything useful with it. I don't know if other MicroBlaze developers are having the same problems, but some messages posted to the embedded processor forum at xilinx.com suggest this may be the case.

Please note this is not a criticism of MicroBlaze or Xilinx at all; I think MicroBlaze is a very exciting and fascinating bit of (virtual) hardware - we are just co-opting it to suit our needs. If it meets some others' needs as well, then even better.

I'll have to check with my masters, but the intention is to release the MicroBlaze uClinux port under the GPL. I daresay that once we are done (or even before, hint hint :), Xilinx might be interested in the results, as it will open up some very interesting possibilities.

Cheers,

John
Article: 52139

Can somebody please point me to a link, or just give an estimate: how much does stuff like a rake receiver or an 8051 core cost?

regards
TI
Article: 52140

I advocate anyone attempting to learn hardware start with schematics. Attempting to learn both hardware and how to make a text tool describe that hardware at the same time is a monumental task. Most of the good hardware designers I know still wind up sketching things out as a schematic first, at least in their minds, and often on the back of a napkin, envelope, sales slip, etc.

Stephen Williams, who is a software person, wrote:

<<Lots of good observations snipped>>

> As for the (long forgotten) point of this thread, I will surprise
> some by suggesting that even C programmers might find schematics the
> most instructive introduction to hardware design. One quickly tires
> of the frustrating incompatibility of schematic editors, but the
> schematic drawing, even on paper, makes for a good mental image of
> hardware which is fundamentally *not* imperative/sequential/software.
>
> --
> Steve Williams              "The woods are lovely, dark and deep.
> steve at icarus.com          But I have promises to keep,
> steve at picturel.com        and lines to code before I sleep,
> http://www.picturel.com      And lines to code before I sleep."

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930  Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759
Article: 52141

Bouh <NoMail@Gggugguugugggg2.org> wrote in message news:<iefq3vcui8oka66a8s4jndigtnleh1o4fp@4ax.com>...
> On Sun, 02 Feb 2003 15:00:39 +0100, Rene Tschaggelar
> <tschaggelar@dplanet.ch> wrote:
> >Bouh wrote:
> >> Is there anybody who has experimented with liquid nitrogen cooling
> >> of overclocked FPGAs?
> >
> >The current internal clocks are in the range of 400MHz and the LVDS
> >channels do 1200Mbits per second, with even faster families arriving
> >almost every quarter.
> >Yes, we also await the SiGe FPGAs, an order of magnitude faster.
> >What is your targeted speed then?
> >
> >Rene
>
> Hi Rene,
>
> Yup I know... but SiGe tech is not available to us yet :/
>
> We need to reach the maximum speed for extremely short periods of
> time.
>
> Bouh

One crazy idea would be to modulate the voltage: a higher voltage would give some extra speed boost, but would also stress the chip. If the speed is only needed at very low duty cycles, then the stress would be minimized. Think how EPROMs work here, integrating charge over time and managing the stress of HV programming.

This would likely require experimentation, and I doubt Vcore could go more than 50% over nominal. These stress effects do change with temperature, so deep cooling may enhance the degradation as well as the speed.

Also, Peltiers, heat pipes and water cooling systems are quite prevalent now for overclocked PCs, but I never heard of nitrogen cooling except when Intel talks of 4GHz P4 tests.
"John_H" <johnhandwork@mail.com> wrote in message news:<MZg%9.8$pK6.2747@news-west.eli.net>... > Before synthesis the timing tool has no idea what your logic will look like. > > Before implementation the timing tool knows the logic but doesn't know the > routing resources or placement information that give you delays between your > pads, registers, latches and RAM elements. > > Once implementation is complete, the static timing analysis can provide > accurate values for your delays. Differences between pre and post PD static timing can be like night and day sometimes--especially if you have scenic routes which demand buffer insertion, improperly specified caplims across separately synthesized units, etc. -tArticle: 52143
Article: 52143

Thanks for the comments. I wonder if setting the IO pins for PCI would work better than TTL?

Bob Fischer

Ernest Jamro <jamro@agh.edu.pl> wrote in message news:<3E39483A.3090702@agh.edu.pl>...
> I have done a similar project and it does not work on every computer.
>
> 1. EPP signals are often not TTL compatible on some computers in EPP mode,
> 2. the PC motherboard chipset (or PP cable) does not work correctly on
>    some PCs (e.g. when the transferred data change from 0x00 to 0xFF
>    the EPP data_strobeN goes high even when the waitN signal is still low),
> 3. some PCs do not support EPP mode at all,
> 4. transfer depends strongly on the PC you've got.
>
> You need not bother about the DMA - use a standard memory transfer
> unless your PC must do some other critical calculation.
>
> Good luck anyway
>
> Ernest Jamro
>
> Bob Fischer wrote:
> > I will be testing an FPGA design that is intended to drive a PC for
> > initial checkout and later an embedded computer, using the parallel
> > port. I selected the EPP protocol as it looks like it can support
> > what I need to do.
> >
> > The FPGA will output 10 bytes of data to the PC each cycle of
> > operation. The data consists of five 14 bit values output in two
> > bytes each. The FPGA will be performing about 40,000 cycles per
> > second. Think of each cycle as a 25 us frame. Data collection (about
> > 4 us) and processing (about 7-8 us) occur for the first 11-12 us of each
> > frame. When the data is ready the parallel port Interrupt line is
> > asserted.
> >
> > The burst rate during the available 13 us data output portion is
> > around 770 kHz. The times have already been verified in the
> > simulations. For the simulation I used an 800 ns byte cycle. The
> > testbench emulates the PC by responding to the Interrupt, invoking the
> > byte cycle timing as expected from the PC by cycling the Data Strobe
> > line (400 ns low then 400 ns high for each byte). The FPGA responds
> > with Waits and presentation of data bytes at the times defined for the
> > EPP port. I used the timing found on the web site
> > www.beyondlogic.org/epp/epp.htm
> >
> > The output of the FPGA is configured for TTL levels, slow transitions.
> > I intend to pipe the FPGA directly to the DB connector and through a
> > 3 ft parallel cable to the PC parallel port.
> >
> > In the PC we will DMA the data to memory and accumulate it for several
> > seconds. A display program will access that memory and generate
> > graphs, etc. for visual analysis of the performance and results.
> >
> > Does this approach to PC interfacing sound feasible? Has anyone out
> > there any prior experience they would like to share? Some Do's and
> > Don'ts?
> >
> > Bob Fischer
> > FPGA independent designer
Article: 52144

> for my particular design I need 7 clocks. Four of them are fixed; for those I
> use the four global clock inputs. I would like to use three additional clocks
> for decoding three serial synchronous lines. Each line has a clock and a
> serial data line. Now the question:
>
> Is it possible to feed these three pairs into the chip and to use their clock
> signals very locally in the design to drive internal serial to parallel
> converters? For that purpose I would need to clock some CLBs from other than
> the four global clock lines. I'm using VHDL. How do I constrain the design so
> that this will work, if it is possible?

What speed are the serial clocks? If they are slow enough, you can make a simple state machine that runs on the main clock and watches for clock edges and drives the serial-parallel converter. Don't forget metastability.

--
The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.
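A rough sketch of that approach, assuming the serial clock is several times slower than the main clock; the entity, signal and generic names below are illustrative, not from the original post:

library ieee;
use ieee.std_logic_1164.all;

entity ser2par is
  generic (WIDTH : integer := 8);
  port (
    clk      : in  std_logic;                    -- main (global) clock
    ser_clk  : in  std_logic;                    -- slow serial clock (asynchronous)
    ser_data : in  std_logic;                    -- serial data line (asynchronous)
    par_out  : out std_logic_vector(WIDTH-1 downto 0));
end ser2par;

architecture rtl of ser2par is
  signal clk_sync  : std_logic_vector(2 downto 0) := (others => '0');
  signal dat_sync  : std_logic_vector(1 downto 0) := (others => '0');
  signal shift_reg : std_logic_vector(WIDTH-1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- two synchronizer stages plus one history bit for edge detection
      clk_sync <= clk_sync(1 downto 0) & ser_clk;
      dat_sync <= dat_sync(0) & ser_data;
      -- rising edge of the synchronized serial clock: shift in one data bit
      if clk_sync(2) = '0' and clk_sync(1) = '1' then
        shift_reg <= shift_reg(WIDTH-2 downto 0) & dat_sync(1);
      end if;
    end if;
  end process;
  par_out <= shift_reg;
end rtl;

The two extra flip-flops on each asynchronous input are the usual metastability guard; the third stage on the clock line only provides the history bit for the edge detect.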
Article: 52145

> The signal quality from the parallel port is often not really good, so using
> STROBE as a clock for an FPGA is a big NO-NO; the slow edges, ringing and
> reflections will cause double clocking -> mess up of state machines.

Thanks! I will consider this. But what about input lines (Ack, Busy, PE...)? Can a Spartan output (LVTTL) drive these signals without problems?

Thanks again!
Article: 52146

The tool won't know where the placed components are or which routing paths, switch matrices, etc. were used, so it needs to have all that knowledge in order to figure out what the actual route delay was.

"Skillwood" <skillwoodNOSPAM@hotmail.com> wrote in message news:b1fj7l$12qoi7$1@ID-159866.news.dfncis.de...
> Hi,
> Can anyone tell me why static timing analysis is done after synthesis and
> implementation?
>
> Regards,
> Skillie
Article: 52147

Hello Friends/Sir,

I wanted to know what the difference is between PCI rev 2.1 and PCI rev 2.2.

Waiting for a reply. Thanks in advance,

Praveen
Article: 52148

It depends on the FSM (especially the input/output) logic. The best result is obtained when you divide your FSM into two or more partially independent state machines. The number of states can often be reduced by introducing a counter whose output is an input to the FSM - this significantly reduces the number of FSM states and even the amount of typing. It is rather difficult to type 192 states without making an error!

The choice between one-hot and binary encoding should also be made with respect to the output function. E.g. when you have about a hundred outputs and each of them is '1' for only a single state, one-hot is a good choice. Nevertheless, if the outputs are '1' for roughly half of all states, it might be better to use binary encoding, as the output logic might otherwise require on average a 192/2-input OR gate!

It should be noted that if you have 192 states, the logic optimizer will often significantly reduce the number of states - sometimes to fewer than 50.

Ernest Jamro

zhengyu wrote:
> Suppose I want to use one-hot encoding to implement a FSM, does it make
> sense if the register length is 192 (very long)?
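As a rough illustration of the counter idea (entity and signal names are made up, not from the post), a three-state machine plus an 8-bit counter can stand in for a 192-state sequence:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity fsm_with_counter is
  port (
    clk, rst, start : in  std_logic;
    done            : out std_logic);
end fsm_with_counter;

architecture rtl of fsm_with_counter is
  type state_t is (idle, run, finished);
  signal state : state_t := idle;
  signal count : unsigned(7 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        state <= idle;
        count <= (others => '0');
      else
        case state is
          when idle =>
            count <= (others => '0');
            if start = '1' then
              state <= run;
            end if;
          when run =>
            count <= count + 1;
            if count = 191 then          -- the counter stands in for 192 states
              state <= finished;
            end if;
          when finished =>
            state <= idle;
        end case;
      end if;
    end if;
  end process;
  done <= '1' when state = finished else '0';
end rtl;

Whether the state register itself is then one-hot or binary barely matters; the counter carries the sequencing, and the encoding choice can follow the output logic as described above.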
Article: 52149

praveen <praveenkumar1979@rediffmail.com> wrote:
> Hello Friends/Sir,
> I wanted to know what the difference is between PCI rev 2.1 and PCI rev 2.2.

What did you do yourself to learn about the differences?

--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------