Hi all, When studying SystemVerilog, a book said that 'virtual' is used to get a reference to an interface. But why doesn't the language use 'ref' to get an interface reference? And what does 'virtual interface' actually mean? Thanks! For example:

class eth_mii_mac;
   local virtual eth_mii_if sigs;

   task new (virtual eth_mii_if sigs);
      this.sigs = sigs;
   endtask: new
   ...
endclass: eth_mii_mac

Best regards, Davy

Article: 111101
Hey, I'm using an XC3S500E-FG320 and let PAR choose the clock pins itself with the right IOSTANDARDS, then I locked the clock pins so I know the setup should work in that configuration (because I need a lot of clock BUFGs, I thought it was the best idea to check whether it will work before I let them route the PCB) ... but then when I lock those pins (which it chose itself) I get a warning:

WARNING:Place:619 - This design either uses more than 8 clock buffers or uses more than 4 DCMs or has clock buffers locked to side-BUFG sites or has DCMs locked to side-DCM sites. The side-DCMs can drive only the BUFGs on the same side. Since side-BUFGs can drive only their half of the device and also exclude a global-BUFG from entering a clock region, it is necessary to partition the clock logic being driven by these clocks into different clock regions. It may be possible through Floorplanning all or just part of the logic being driven by the global clocks to achieve a legal placement for this design

Why does it give a warning when I lock the clock pins it chose itself? These are the clock pins it chose and the BUFGMUXes:

NET "clk_xtal" LOC = "B9" | IOSTANDARD = LVTTL ;
NET "pix_odck1" LOC = "P10" | IOSTANDARD = LVTTL ;
NET "pix_odck2" LOC = "J17" | IOSTANDARD = LVTTL ;

INST "c_input_dcm/LVSD_CLK_FB_BUFG_INST" LOC = BUFGMUX_X1Y11;
INST "c_input_dcm/CLKFB_BUFG_INST" LOC = BUFGMUX_X1Y0;
INST "c_input_dcm/CLK_OUT_BUF_BUFGMUX_INST" LOC = BUFGMUX_X1Y1;
INST "c_input_dcm/CLKFX_BUFG_INST" LOC = BUFGMUX_X2Y1;
INST "c_input_dcm/CLK_OUT_int_BUFGMUX_INST" LOC = BUFGMUX_X2Y0;
INST "c_input_dcm/CLKIN2_BUFGMUX_INST" LOC = BUFGMUX_X3Y3;
INST "c_input_dcm/LVDS_CLKFX_BUFG_INST" LOC = BUFGMUX_X1Y10;
INST "clk_xtal_BUFGP" LOC = BUFGMUX_X2Y10;

So we have one clock for one half of the device (PIX_ODCK2) and the others as global clocks? What am I doing wrong? Thanks in advance, kind regards, Tim

Article: 111102
entanglebit@gmail.com wrote: > > I'm beginning to do some work in the efficient mapping of intensive > algorithms (high orders of complexity) into hardware. I was hoping > that some of you with similar experience may be able to suggest some > resources -- textbooks, journals, websites -- of particular interest. > Your input is greatly appreciated. Look up Sedgewick's "Algorithms" -- Chuck F (cbfalconer at maineline dot net) Available for consulting/temporary embedded and systems. <http://cbfalconer.home.att.net>

Article: 111103
Hi Antti. Thank you very much. I did pass the project to the hardware designers and they fixed a problem due to contention with the Intel StrataFlash. Bye, Alfmyk.

Article: 111104
Ok, thanks again Antti! I didn't know, and had supposed, that an interrupt-driven example project was present in the EDK Examples. Anyway I'd like to ask you: 1) In general, what is the difference between a static ISR (an ISR assigned via the XPS Software Menu) and a dynamic ISR? Is the only difference the possibility to change the ISR routine after it was assigned (dynamically)? 2) Moreover, in general, taking a look at the EDK examples I have seen that an ISR has an input parameter like ( (void *) baseAddress_p ): is it always the base address of the related IP peripheral or not? Thanks in advance. Cheers, Al.

Article: 111105
Brad Smallridge wrote: > Say, I was getting good results from a double edge clock > input circuit and a single DCM generating 140 MHz (40MHz > xclk). The trick was to select a shifted output depending > on the fast clock. > > I was unable, however, to use a double edge clock circuit > on the output. The OSERDES does not have a 7x option and > when you try 8x you get 8x data bits no matter how you > drive the clkdiv input. > > Do you have any suggestions here? You already need to use an entire IO tile for each output pair, as you are using a differential IO standard. As such you have two OSERDES available for each pair. While you could use them together to get DDR mode (as you have done), I would suggest using them cascaded in SDR x7 mode. The slowest speed grade of V4 will easily support a 280MHz clock and IO, and no additional logic is required to determine the pixel boundary. This will work up to the max clock rate of the DCM, which is (IIRC) about 400MHz, or about a 57MHz pixel clock. Regards, Erik.

---
Erik Widding
President, Birger Engineering, Inc.
(mail) 100 Boylston St #1070; Boston, MA 02116
(voice) 617.695.9233 (fax) 617.695.9234 (web) http://www.birger.com

Article: 111106
Peter, you're right that youngsters (my friends for example) don't know how the computer really works, despite the fact that they use it almost every day. This secret is revealed only to the most curious youngsters who devote their time to it. I am sure that there are still youngsters who are willing to understand the secrets of a silicon brain. Now, in the age of the internet, information is easily available to anyone connected, be it a 6-year-old kid or a 100-year-old grandpa (no offence, Peter). The people with knowledge should be willing to give their knowledge to the masses (e.g. publish it on the net); there will always be someone who will accept it. The problem is that a life is too short to explore all the interesting things. When we explore things we usually begin at the surface of the problem. Then you remove it layer by layer, like peeling an onion. What to do when there are too many layers? Computer technology gains many new layers every year (an exponential number according to Moore's law). I think that nobody can keep pace with these layers. So at some point you give up and study only the things you prefer. Youngsters are familiar with games, so some learn how to make one. Others prefer the secrets of the operating system - they build OSes. Some of them are interested in HW - like most of us in this newsgroup. I for example am a very curious mechanical engineer. When I got bored in mechanical engineering, where the pace of development is nothing in comparison to the electronics industry, I also studied electronics. Now I prefer electronics for a simple reason - it is far more complex and hence gives me much more satisfaction when learning. Sometimes I realise that I am very weak in fundamentals, because I missed the lectures on the fundamentals of electronics. To be honest I don't have a clue what "FF metastability" is and what the cause of it is. BTW: What is an FF? I imagine it as a memory cell. Despite my poor knowledge of fundamentals I am able to build very complex computer systems, write software for them... How can that be? Well, some people are devoted to fundamentals, some to the layer above that, others to the layer above that, and so on. At the end there are "normal" computer users that do not want to know how the computer works; they just want to use it for Word, games, watching movies... I wouldn't worry about passing the knowledge to youngsters. If there is a need for that knowledge, they will learn it. So specific knowledge is learned by a small group of people, but Word usage and email sending is learned by almost every youngster (in the Western world!). That's evolution. Cheers, Guru

Peter Alfke wrote: > There is a difference, 60 years ago, a curious kid could at least try > to understand the world around him/her. > Clocks, carburators, telephones, radios, typewriters, etc. > Nowadays, these functions are black boxes that few people really > understand, let alone are able to repair. > Youngsters today can breathe life into a pc by hitting buttons in > mysterious sequences... > Do they really understand what they are doing or what's going on? > "If the engine stalls, roll down the window" :-) > > Here is a simple test, flunked by many engineers: > How can everybody smoothely adjust the heat of an electric stove, or a > steam iron ? > Hint: It is super-cheap, no Variac, no electronics. Smoke and mirrors? > Answer: it's slow pulse-width modulation, controlled by a self-heating > bimetal strip. > Cost: pennies...
> > Well, the older generation has bemoaned the superficiality of the > younger generation, > ever since Socrates did so, a hundred generations ago. Maybe there is > hope... > Peter Alfke > > > On Oct 28, 3:24 pm, PeteS <peter.smith8...@ntlworld.com> wrote: > > Peter Alfke wrote: > > > We had a discussion at lunch, about the future when us dinosaurs are > > > gone. > > > Who will then understand those subtleties, only the tiny cadre of IC > > > designers? > > > Many new college graduates' eyes glaze over when I ask them about the > > > way a flip-flop works, and how it avoids a race condition in a shift > > > register. And clock skew and hold-time issues. > > > Hard-earned "wisdom"... > > > Peter Alfke > > > > > On Oct 28, 12:02 pm, PeteS <peter.smith8...@ntlworld.com> wrote: > > > > >>Peter Alfke wrote: > > > > >>>Well, in the beginning of my professional life, I built flip-flops out > > >>>of two Ge transistors, 8 resistors, two diodes and two capacitors. > > >>>Remember, the term J-K flip-flop comes from a standardized sinle-FF > > >>>pc-board where the connector oins were labeled A-Z, and the set and > > >>>reset inputs were on the adjacent central pins J and K. > > >>>Not a joke... > > >>>Peter Alfke > > > > >>>On Oct 27, 4:53 pm, Jim Granville <no.s...@designtools.maps.co.nz> > > >>>wrote: > > > > >>>>PeteS wrote: > > > > >>>>>Jim Granville wrote: > > > > >>>>>>PeteS wrote: > > > > >>>>>>>I have had *true* metastable problems (where an output would float, > > >>>>>>>hover, oscillate and eventually settle after some 10s of > > >>>>>>>*milliseconds*), but those I have seen recently don't qualify :) > > > > >>>>>>Can you clarify the device/process/circumstances ? > > > > >>>>>>-jg > > > > >>>>>This was a discrete design with FETs that I was asked to test (at a > > >>>>>customer site). The feedback loop was not particularly well done, so > > >>>>>when metastability did occur, it was spectacular.
Do you mean they built a D-FF, using discrete FETS ?! > > > > >>>>I have seen transistion oscillations (slow edges) cause very strange > > >>>>effects in Digital Devices, but I'd not call that effect metastability. > > > > >>>>-jg
Amusing > > > > >>I too have made flip flops from discrete parts in the distant past. The > > >>metastable problem I encountered was due to slow rising inputs on pure > > >>CMOS (a well known issue) and was indeed part of the feedback path. > > > > >>I remember making a D FF using discrete parts only a few years ago > > >>because it had to operate at up to 30VDC. I had to put all the usual > > >>warnings on the schematic page about setup/hold times etc. > > > > >>There are times when the knowledge of just what a FF (be it JK, D or > > >>M/S) is comes in _real_ handy. > > > > >>Cheers > > > > >>PeteS
There was a TV show perhaps 20 years ago the name of which I do not > > remember. In it, the computer that ran the spacecraft (named Mentor > > because the female of the group had thought the name) refused to give > > information about using the transporter system. > > > > It said 'Wisdom is earned, not given' > > > > Cheers > > > > PeteS

Article: 111107
I am having about the same experience. I upgraded my computer to EDK/ISE 8.2.
1. I upgraded my GSRD2 (PPC, MPMC2, TEMAC) design to v8.2. There were actually no problems detected until I used the flashwriter.tcl script. It does not work at all. OK, a minor thing to remedy.
2. I upgraded my PPC CoreConnect project to 8.2. First the XMD connect failure (wrong processor version 0x00000000). Then I connected successfully using "connect ppc hw", but the design does not work at all - probably some hw error. The sw crashes when setting up the interrupt controller (it was working nicely in 8.1).
Fu.. that 8.2 and hope for the SP2 to remedy the problems. Cheers, Guru

Zara wrote: > On 27 Oct 2006 00:18:43 -0700, "Antti" <Antti.Lukats@xilant.com> > wrote: > > >Zara schrieb: > > > >> I quit. > >> > >> A brand new project, with only a dcm, a debug module, 16K ram, an uart > >> and intc will not work (problems writing BRAM). > >> > >> Will continue using 8.1 for a long time, I suspect > > > >quit? your life? > > > >EDK 8.2 does work > >I have done maybe over 50 EDK systems with 8.2 > >- all of them work > > > >if your system doesnt there is something wrong > >and you should figure out what it is, not quitting. > > > >Antti > > > No, it was a figure of speech. Whta I really mean is that I will stop > investigation while I am completing the current phase of my project. > As soon as some 40something boarsd are delivered; I will resume > trying. > > As a sapnish adage goes "Last to be lost is Hope" ;-) > > Regards, > > Zara

Article: 111108
Guru wrote: > Then you remove it layer by layer, like peeling an onion. What > to do when there are too many layers. Computer technology gains many > new layers every year (an exponential number according to moore's law). Moore's Law says that the transistor density of integrated circuits doubles every 24 months: http://en.wikipedia.org/wiki/Moore%27s_Law I think layers increase more linearly, maybe one every 5 years, like when upgrading from DOS with direct hardware access to Windows, with an intermediate layer for abstracting the hardware access, or from the Intel 8086 to the Intel 386, with many virtual 8086s. So you can keep pace with the layers. You don't need to be an expert for every layer, but it is easy to learn the basics about which layers exist, what they are doing and how they interact with other layers. It is more difficult to keep pace with all the new components, like PCIe, new WiFi standards etc., but usually they don't change the layers or introduce new concepts. If PCs were built with FPGAs instead of CPUs, and if you started a game which reconfigures a part of an FPGA at runtime to implement special 3D shading algorithms in hardware, this would change many concepts, because then you don't need to buy a new graphics card, but you can install a new IP core to enhance the functionality and speed of your graphics subsystem. If it is too slow, just plug in some more FPGAs and the power is available for higher performance graphics, but when you need OCR, the same FPGAs could be reconfigured with neural net algorithms to do this at high speed. There are already some serious applications that use the computational power of the shader engines of graphics cards, but most of the time they are idle when you are not playing games. Implementing an optimized CPU in FPGAs for the current task would be much better.

-- Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

Article: 111109
On Sat, 28 Oct 2006 08:18:51 +1300, Jim Granville <no.spam@designtools.maps.co.nz> wrote: >So this requires you generate all outputs ? Only if you actually need to test them. If a given vector file wants to check a specific port, but that port is a don't care for most of the vectors, then you just enter a '-' for that port in the expected outputs. You could also specify the expected outputs algorithmically, if possible: you can declare variables and do arithmetic on them, and use them as module inputs or test values. >If the sim does fail, what format is the failure report, and >how does the user relate that back to a given line of test source code ? Here's an example from a chip I've just done. Line 392 of one of the vector files looks like (note that I've put in a line break to try to avoid wrapping problems on your newsreader):

[11C 1 0011 0 5A5A5A5A xxxxxxx xxxxxxx zzzzzzzz zzzzzzzz] ->
[-0 -------- ---- EE FF 011 5A5A5A5A 011 5A5A5A5A ------- -- --]

If I change the last '5A5A5A5A' on the expected outputs to '5B5A5A5A', then ModelSim reports:

# ** Note: L2HDB error: line 392 (expected 0x5b5a5a5a, got 0x5a5a5a5a)
# Time: 4381 ns Iteration: 0 Instance: /testbench
# ** Note: simulation complete: 305 out of 306 vectors passed.
# Time: 7007400 ps Iteration: 0 Instance: /testbench

The pin's identified, the line of the vector file, and the time. >Interesting idea. > What about doing a simple VHDL/Verilog/Demo-C version that is free, and >a smarter version that is $$ ? > > A problem with this type of tool, is explaining to each user how >its feature set can offer more to a given task, so the free/simple >versions ( and good examples) do that. Hmmm... maybe a 'linear' version (ie. no smarts) for free, and looping/variables/procedures/macros etc. for $$? My recollection is that ABEL just gave you the dumb version, but it seemed useful at the time. Yes, explaining that this is (hopefully) useful is difficult. I went through this with a friend recently, who never writes testbenches, and who spent years on ABEL. He wasn't impressed... :( > For those who want it, I think most? FPGA vendors have (free) waveform >entry for simulation entry, so that is one competition point. I've never got this. Does anyone actually do that? > There is also this work by Jan Decaluwe, that uses Python for >both verilog generation, and testbench generation. >http://myhdl.jandecaluwe.com/doku.php/start >http://myhdl.jandecaluwe.com/doku.php/cookbook:stopwatch Yes, I know of Jan's stuff, but hadn't realised he also does TB generation - I'll have a look at that. Evan

Article: 111110
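For readers wondering what such a vector file boils down to once it reaches the simulator, the sketch below shows the self-checking style in plain VHDL. It is not the generated code from Evan's tool; the entity, the signal names and the two vectors are invented for illustration, and a registered pass-through stands in for the real DUT so the example runs on its own. The report string is where a generator would embed the vector-file line number, which is what makes the ModelSim messages above traceable back to the source vectors.

library ieee;
use ieee.std_logic_1164.all;

entity vector_check_tb is
end entity;

architecture sim of vector_check_tb is
  signal clk  : std_logic := '0';
  signal din  : std_logic_vector(31 downto 0) := (others => '0');
  signal dout : std_logic_vector(31 downto 0);
begin
  clk <= not clk after 5 ns;

  -- stand-in for the real DUT: a simple registered pass-through
  dut : process (clk)
  begin
    if rising_edge(clk) then
      dout <= din;
    end if;
  end process;

  check : process
    type vec_t is record
      stim : std_logic_vector(31 downto 0);  -- driven inputs
      exp  : std_logic_vector(31 downto 0);  -- expected outputs
    end record;
    type vec_array_t is array (natural range <>) of vec_t;
    constant vecs : vec_array_t := (
      (stim => x"5A5A5A5A", exp => x"5A5A5A5A"),
      (stim => x"00000001", exp => x"00000001"));
  begin
    for i in vecs'range loop
      din <= vecs(i).stim;
      wait until rising_edge(clk);
      wait for 1 ns;                          -- let the outputs settle
      -- a generator would put the vector-file line number in this message
      assert dout = vecs(i).exp
        report "vector " & integer'image(i) & " failed"
        severity error;
    end loop;
    report "simulation complete";
    wait;
  end process;
end architecture;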
On 27 Oct 2006 15:53:42 -0700, "Andy Peters" <Bassman59a@yahoo.com> wrote: >This sort of test-vector entry is basically unmaintainable, and to be >honest, I haven't written this type of go/no-go test in ages (maybe >since the last time I used ABEL). > >I prefer to write test benches at a higher level. I like to have >models of the things to which the DUT connects drive the DUT. For >example, if my FPGA is a NAND flash controller that sits between a >microprocessor bus and the NAND flash devices, I'll use a >bus-functional model of the micro and the flash, and my test bench will >mostly consist of MicroWriteBus() and MicroReadBus() "commands." > >I can't go back to the simple drive-a-vector test. Yes, makes sense; I've got no problem with high-level testbenches. If a device or module has lots of buried state, and it's not obvious when any outputs or observable points should change as a result of an input change, or even *what* that change should be, then a test vector approach is pretty useless. If your problem is, for example, apply an opcode to a CPU and check that a memory location changes at some arbitrary later time, then you need to be a lot smarter. But, horses for courses. Consider: - Your complex DUT is probably built of simpler modules. If a module was just an FSM, a register block, a decoder, or a bus interface, for example, and you could test the module simply by writing 20/50/100/whatever vectors, would it be worthwhile? Or do you just stitch the whole thing together and test the top level? - Many complete chips are surprisingly simple (seen from the outside, anyway). I first used this code for a (large) DSP chain for baseband processing. I got MATLAB data for the inputs, with expected data for the outputs. Once I'd automated the test, verification was trivial. Seven years on, and I've just finished part of a Spartan-3. Nothing fancy - 25K marketing-gates - it just translates accesses between two different buses connected to 15 external chips, handles bursts, has a register block with control and status ops, does some address mapping between the various chips, and so on. I did a basic test in a day by writing 200-odd vectors, and I had a complete-ish test of 752 vectors with a bit more work. Whatever way I'd tested it, these 752 vectors would have existed somewhere, if only implicitly in the VHDL code, or even in the documentation. Why not make the vectors themselves the test? The rest of it is just unnecessary verbiage that you have to repeat again, and again, for each new module and device you do. - What are MicroWriteBus() and MicroReadBus()? I can do macros and pass parameters to the macros; you can call the macros from wherever you want in the vector file. I can also do basic C-like control structures - looping, branching based on tested values, and so on. Ok, is it worth any more than $0 now? :) EvanArticle: 111111
On Fri, 27 Oct 2006 15:39:39 GMT, "Hans" <hans64@ht-lab.com> wrote: >No since I don't understand the advantages of your product, also changing PE >to SE or adding a SystemC license to PE is not particular cheap and it might >be more cost effective to spend some more time on your testbench :-) Ok, here's a really trivial example. You've just coded a D-type F/F - how do you test it?

- Option 1: code a testbench module, instantiate your D-type, provide a clock and some way to apply inputs (in this case, probably just explicitly driving the pins sequentially as required), check that the output is as required, inform the user that the test passed or failed. If it failed, tell the user where. Not difficult, but tedious. Also much more tedious if you need to do a timing sim, and you have to ensure that the inputs are driven only for the required setup and hold times, and the outputs only change between tCOmin and tCOmax. It's also not productive; this is all code you've written a thousand times before. And it may have a bug in it, and the testbench itself must be tested before you can use it to test your DUT.

- Option 2:

([CLK, RST, D] -> [Q, QB])
[x, 0, x] -> [0, 1]; async reset
[C, 1, 1] -> [1, 0]; clock 1
[C, 1, 0] -> [0, 1]; clock 0

As I said, it's trivial. You don't need to know any VHDL or Verilog, and you don't have to write a test program. All that tedious stuff has been done for you. Of course, you're not going to test a D-type, but there are certainly far more complex cases where this is useful. And there are more complex cases where this isn't useful; see my reply to Andy. Evan

Article: 111112
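To make "Option 1" concrete, here is roughly the hand-written VHDL testbench that those three vectors replace. It is a behavioural sketch only, with no setup/hold or tCO checking, and the entity and port names are assumed rather than taken from any real design.

library ieee;
use ieee.std_logic_1164.all;

entity dff is
  port (clk, rst, d : in  std_logic;
        q, qb       : out std_logic);
end entity;

architecture rtl of dff is
  signal q_i : std_logic;
begin
  process (clk, rst)
  begin
    if rst = '0' then                 -- active-low async reset, as in the vectors
      q_i <= '0';
    elsif rising_edge(clk) then
      q_i <= d;
    end if;
  end process;
  q  <= q_i;
  qb <= not q_i;
end architecture;

library ieee;
use ieee.std_logic_1164.all;

entity dff_tb is
end entity;

architecture sim of dff_tb is
  signal clk, rst, d : std_logic := '0';
  signal q, qb       : std_logic;
begin
  uut : entity work.dff port map (clk, rst, d, q, qb);

  stim : process
  begin
    -- vector 1: async reset
    rst <= '0'; d <= '0';
    wait for 10 ns;
    assert q = '0' and qb = '1' report "reset failed" severity error;

    -- vector 2: release reset, clock in a 1
    rst <= '1'; d <= '1';
    wait for 10 ns;
    clk <= '1'; wait for 10 ns;
    assert q = '1' and qb = '0' report "clock 1 failed" severity error;
    clk <= '0'; wait for 10 ns;

    -- vector 3: clock in a 0
    d <= '0';
    wait for 10 ns;
    clk <= '1'; wait for 10 ns;
    assert q = '0' and qb = '1' report "clock 0 failed" severity error;

    report "dff test passed";
    wait;
  end process;
end architecture;

Even for a trivial D-type, the hand-written version is many times longer than the three vectors, which is the point being made.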
Hello, I have to develop a safety-critical application using a Xilinx FPGA. The system should be immune to matrix interconnect switch changes (soft errors) during operation; otherwise, if a switch changes the interconnection, it should go into a safe state. Is there a way to implement fail-safe interconnections (PIPs), for example by setting some parameters in PAR? I already read the manual, but I haven't found such information. There are lots of parameters about timing, but nothing clear about a safe implementation. I have also read about scrubbing and readback, but I was wondering whether PAR is able to do it. Is it possible to know how PAR works at a low level, or is it secret and covered by patents? Many Thanks, Marco T.

Article: 111113
Frank, I was addressing a different issue: That the knowledge base of the fundamental technology inevitably is supported by fewer and fewer engineers, so that soon (now!) people will manipulate and use technology that they really do not understand. And that is a drastic change from 40 years ago. I think you understand German, to appreciate Goethe's words:

Was du ererbt von Deinen Vätern hast,
Erwirb es, um es zu besitzen!

Cheers Peter

On Oct 29, 10:19 am, Frank Buss <f...@frank-buss.de> wrote: > Guru wrote: > > Then you remove it layer by layer, like peeling an onion. What > > to do when there are too many layers. Computer technology gains many > > new layers every year (an exponential number according to moore's law). Moore's Law says that the transistor density of integrated circuits, > doubles every 24 months: > > http://en.wikipedia.org/wiki/Moore%27s_Law > > I think layers are increased more linear, maybe one every 5 years, like > when upgrading from DOS with direct hardware access to Windows, with an > intermediate layer for abstracting the hardware access, or from Intel 8086 > to Intel 386, with many virtual 8086. So you can keep the pace with the > layers. You don't need to be an expert for every layer, but it is easy to > learn the basics about which layers exists, what they are doing and how > they interact with other layers. > > It is more difficult to keep the pace with all the new components, like > PCIe, new WiFi standards etc., but usually they don't change the layers or > introduce new concepts. If PCs would be built with FPGAs instead of CPUs, > and if you start a game, which reconfigures a part of a FPGA at runtime to > implement special 3D shading algorithms in hardware, this would change many > concepts, because then you don't need to buy a new graphics card, but you > can install a new IP core to enhance the functionality and speed of your > graphics subsystem. If it is too slow, just plugin some more FPGAs and the > power is available for higher performance graphics, but when you need OCR, > the same FPGAs could be reconfigured with neuronal net algorithms to do > this at high speed. > > There are already some serious applications to use the computational power > of shader engines of graphics cards, but most of the time they are idle, > when you are not playing games. Implementing an optimized CPU in FPGAs for > the current task would be much better. > > -- > Frank Buss, f...@frank-buss.de http://www.frank-buss.de, http://www.it4-systems.de

Article: 111114
Evan Lavelle wrote: > I'm thinking about brushing this up a bit, adding Verilog support, and > flogging it for maybe 100 - 300 USD a go. To use it, you obviously > still need a simulator - the software currently produces VHDL-only > output, and uses your simulator to simulate your chip using the > auto-generated verification code. Why use a code generator when you can write such bit vector tests within VHDL itself?

constant rom_test: unsigned(63 downto 0) := x"cf000000e4ec2933";

rom := rom_test;
for j in 1 to 64 loop
  wait until ds_wire = '0';
  wait until ds_wire = 'Z';
  if rom(0) = '0' then
    ds_wire <= '0';
    wait for 30 us;
    ds_wire <= 'Z';
    wait for 1 us;
  end if;
  rom := shift_right(rom, 1);
end loop;

-- wait until latched
wait until data_valid = '1';

-- check, if read process was successful
assert rom_test(55 downto 8) = ds_wire_rom
  report "ROM read error" severity failure;

You can refactor the for-loop to a function or procedure, if you need it multiple times.

-- Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

Article: 111115
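The refactoring Frank mentions might look like the sketch below. The signal name and the timing come from his fragment; the procedure name is invented, and it is assumed to be declared in a testbench architecture (or a package) that already has ieee.std_logic_1164 and ieee.numeric_std visible, as his code requires.

-- Drives one 64-bit ROM image onto ds_wire, bit by bit (a sketch of
-- the refactoring suggested above; write_rom_bits is an invented name).
procedure write_rom_bits (
  constant bits    : in    unsigned(63 downto 0);
  signal   ds_wire : inout std_logic) is
  variable rom : unsigned(63 downto 0) := bits;
begin
  for j in 1 to 64 loop
    wait until ds_wire = '0';
    wait until ds_wire = 'Z';
    if rom(0) = '0' then
      ds_wire <= '0';
      wait for 30 us;
      ds_wire <= 'Z';
      wait for 1 us;
    end if;
    rom := shift_right(rom, 1);
  end loop;
end procedure;

-- called from the stimulus process as, for example:
--   write_rom_bits(rom_test, ds_wire);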
Peter Alfke wrote: > That the knowledge base of the fundamental technology inevitably is > supported by fewer and fewer engineers, The good news: the fewer people know the basics, the more you can earn, when a customer needs it :-)

-- Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

Article: 111116
Marco, The way to do this is to use the TMR Tool(tm), which triplicates everything (resets, clock trees, etc.) and adds triple voting of all feedback paths. The tool automatically takes a design, and creates a functionally correct triplicated version. Nothing will prevent soft errors (except being 30 meters underwater). In addition to TMR, the device also requires scrubbing, or frame error correction (available on V4 and V5). That way a bit flip is corrected before another one happens. http://www.xilinx.com/support/training/abstracts/tmr-tool.htm Austin

Marco T. wrote: > Hallo, > I should develop a safety-critical application using Xilinx fpga. > The system should be immune from matrix interconnect switch changes > (soft-errors) during operation, otherwise, if a switch changes the > interconnection, it should go into a safe state. > > There is a way to implement fail safe interconnections (pip), in example > setting some parameters using PAR? > > I already read the manual, but I don't have found such information. There > lots of parameters about timing, but it's not clear about safe > implementation. > > I read also about scrubbing and readback, but I was wondering if PAR could > be able to do it. > > It is possble to know how PAR works at low level, or it's secred and covered > by patents? > > > Many Thanks > Marco T.

Article: 111117
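For readers new to TMR, the basic element inserted on triplicated signals is a 2-of-3 majority voter; a minimal VHDL sketch is below. This only illustrates the voting idea and is not output from the TMR Tool, which, as Austin describes, also triplicates clocks, resets and feedback paths and votes the feedback.

library ieee;
use ieee.std_logic_1164.all;

-- A 2-of-3 majority voter: a single upset copy is out-voted.
entity majority_voter is
  port (
    a, b, c : in  std_logic;   -- the three redundant copies
    y       : out std_logic    -- voted result
  );
end entity;

architecture rtl of majority_voter is
begin
  -- '1' whenever at least two of the three inputs are '1'
  y <= (a and b) or (a and c) or (b and c);
end architecture;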
"Peter Alfke" <alfke@sbcglobal.net> wrote in message news:1162149643.159519.289870@i42g2000cwa.googlegroups.com... > Frank, I was addressing a different issue: > That the knowledge base of the fundamental technology inevitably is > supported by fewer and fewer engineers, But with a capitalistic economy one would expect that it will be supported by the proper number of engineers and if that number is 'small' than those few will earn more money than if there were a 'lot' of those engineers. In any case, the appropriate amount of money will be spent on those functions....but of course nowhere is there a true capitalistic economy, but I expect that the knowledge/skill transfer will in fact be transferred if there still exists a market for it. > so that soon (now!) people will > manipulate and use technology that they really do not understand. > And that is a drastic change from 40 years ago. Ummm.....the true 'fundamental' knowledge underpinning all electronics as we understand it today are contained in Maxwell's equations and quantum mechanics. I'd hazard a guess that engineer's have been designing without that true knowledge of both for far longer than 40 years. What you're considering as fundamental seem to be the things things that you started your career with and were thought to be 'fundamental' back then, like how flip flops are constructed from transistors, why delays are important, bandwidths of transistors, etc. But even those things are abstractions of Maxwell and quantum....is that a 'bad' thing? History would indicate that it's not. The electronics industry has done quite well in designing many things without having to hark back to Maxwell and quantum to design something. There is the quote about seeing farther because I stood on the shoulders of giants that comes to mind. How far away one can get without a knowledge of what is 'fundamental' though is where catastrophes can happen but productivity improvements over time are driven by having this knowledge somehow 'encoded' and moving away from direct application of that fundamental knowledge so that the designers of the future do not need to understand it all....as stated earlier, there are many layers to the onion, to many to be fully grasped by someone who also needs to be economically productive to society (translation: employable). There is the real danger of not passing along the new 'fundamentals' to the next generation so that lack of knowledge of old does not result in failures of the future. What exactly the new 'fundamental' things are is subjective....but in any case they won't be truly be 'fundamental' unless it is a replacement for Maxwell's equations and the theory of quantum mechanics. KJArticle: 111118
Peter Alfke wrote: > To paraphrase Karl Marx: > A spectre is haunting this newsgroup, the spectre of metastability. > Whenever something works unreliably, metastability gets the blame. But > the problem is usually elsewhere. Strangely enough, this newsgroup is the only place where I have seen this aforementioned ghost in 10 years of digital design. (yeah I know Peter, these 10 years make me look like a baby, compared to your experience ;o) [...] > Rule #1: Never feed an asynchronous input into more than one > synchronizer flip-flop. Never ever. Got bitten once there. Never twice. Nicolas

Article: 111119
KJ wrote: > "Peter Alfke" <alfke@sbcglobal.net> wrote in message > news:1162149643.159519.289870@i42g2000cwa.googlegroups.com... >> Frank, I was addressing a different issue: >> That the knowledge base of the fundamental technology inevitably is >> supported by fewer and fewer engineers, > But with a capitalistic economy one would expect that it will be supported > by the proper number of engineers and if that number is 'small' than those > few will earn more money than if there were a 'lot' of those engineers. In > any case, the appropriate amount of money will be spent on those > functions....but of course nowhere is there a true capitalistic economy, but > I expect that the knowledge/skill transfer will in fact be transferred if > there still exists a market for it. > >> so that soon (now!) people will >> manipulate and use technology that they really do not understand. >> And that is a drastic change from 40 years ago. > Ummm.....the true 'fundamental' knowledge underpinning all electronics as we > understand it today are contained in Maxwell's equations and quantum > mechanics. I'd hazard a guess that engineer's have been designing without > that true knowledge of both for far longer than 40 years. > > What you're considering as fundamental seem to be the things things that you > started your career with and were thought to be 'fundamental' back then, > like how flip flops are constructed from transistors, why delays are > important, bandwidths of transistors, etc. But even those things are > abstractions of Maxwell and quantum....is that a 'bad' thing? History would > indicate that it's not. The electronics industry has done quite well in > designing many things without having to hark back to Maxwell and quantum to > design something. There is the quote about seeing farther because I stood > on the shoulders of giants that comes to mind. > > How far away one can get without a knowledge of what is 'fundamental' though > is where catastrophes can happen but productivity improvements over time are > driven by having this knowledge somehow 'encoded' and moving away from > direct application of that fundamental knowledge so that the designers of > the future do not need to understand it all....as stated earlier, there are > many layers to the onion, to many to be fully grasped by someone who also > needs to be economically productive to society (translation: employable). > > There is the real danger of not passing along the new 'fundamentals' to the > next generation so that lack of knowledge of old does not result in failures > of the future. What exactly the new 'fundamental' things are is > subjective....but in any case they won't be truly be 'fundamental' unless it > is a replacement for Maxwell's equations and the theory of quantum > mechanics. > > KJ > > Well, there's fundamentals and there's fundamentals :) One I see missing is an intuitive feel for transmission lines. For years, new engineers were churned out with the mantra of 'everything's going digital and we don't need that analog crap', but when edge rates are significantly sub-microsecond everything's a transmission line. Certainly it has enhanced my employability that I learned those things both in theory and hard earned practice, but far more people need to learn these things in a world of ultra highspeed interconnects. 
One cannot always trust software simulations[1], quite apart from the issue of setting up a layout.[2] This is a fundamental, at least imo, and it doesn't seem to be getting the attention it deserves.[3] Other things could be cited, of course. Using a technology one does not understand is all well and good while it works. When it doesn't, the person is stumped because they don't understand the underlying principles.

[1] The most amusing software bug I ever had was an updated release of Altera's HDL tools where it synthesised a SCSI controller to a single wire. That was the predecessor to Quartus and happened in '98.
[2] Really high-speed systems have the layout defined for the interconnect [for best signal integrity and EMC issues] which then determines part placement, which is almost 180 out from standard layouts.
[3] This is a huge growth industry for those with the requisite knowledge; see e.g. Howard Johnson et al. They'll give a seminar at a company for a few $10k or so for a day plus printed materials.

Cheers PeteS

Article: 111120
Evan Lavelle wrote: > On Sat, 28 Oct 2006 08:18:51 +1300, Jim Granville >>Interesting idea. >> What about doing a simple VHDL/Verilog/Demo-C version that is free, and >>a smarter version that is $$ ? >> >> A problem with this type of tool, is explaining to each user how >>its feature set can offer more to a given task, so the free/simple >>versions ( and good examples) do that. > > > Hmmm... maybe a 'linear' version (ie. no smarts) for free, and > looping/variables/procedures/macros etc. for $$? My recollection is > that ABEL just gave you the dumb version, but it seemed useful at the > time. ABEL is still supplied with some tool flows. CUPL is also still useful for the CPLD end of the design, and it is both free, and has some looping/macros in the functional Sim engine. CUPL is functional-sim only, so has no timing smarts itself, but you CAN take the fitter (verilog/vhdl) output, and run a testbench on that. CPLDs tend to be less of a timing puzzle than FPGAs, so this extra flow is not used much. What CUPL _does_ offer is test vectors in the JED file, which is good for smaller devices that go via device programmers. CUPL also allows a '*' in the vector input, which means CUPL generates (fills in) the H.L.Z in the output, and that can speed the test process. If you then want to lock in all test values, you can paste the SimOut values back into the SimIN file, but I tend to scan the SimOUT carefully, and then rely on the same results being generated from the same stimulus. (These are stable tools.) Here is a simple example:

SimIN  0C1* ** **** *** **** 111111 ****** ****** ***** C *
SimOUT 0C1L LL LHHH LHL ZZZZ 111111 LLLLLL LLLLLL LLHHH C L

> > Yes, explaining that this is (hopefully) useful is difficult. I went > through this with a friend recently, who never writes testbenches, and > who spent years on ABEL. He wasn't impressed... :( > > >> For those who want it, I think most? FPGA vendors have (free) waveform >>entry for simulation entry, so that is one competition point. > > > I've never got this. Does anyone actually do that? I have a friend who thinks this is great. He does a lot of BUS interface work. Personally, I prefer using a text editor, and command line tools. I can see the appeal of waveform entry, but the custom-file risk of that outweighs the benefits, for me. -jg

Article: 111121
Hi all, I'm developing a filter for imaging with MATLAB. In a few days the code will be completed, but I haven't figured out whether it is possible to compile it directly to VHDL, without rewriting all the stuff from scratch... Any experience or suggestions about it? Thanks!

Article: 111122
<lerbacattivanonmuoremai@gmail.com> wrote in message news:1162161643.411233.256720@i42g2000cwa.googlegroups.com... > hi to all, > I'm developing a filter for imaging with matlab. In few days the code > will be completed but I don't understood if is possible to compile it > directly to vhdl, without rewrite all the stuff from scratch... > > Any experience or suggestion about it? > Check with the folks at Matlab. I seem to remember reading something just recently about some product that they have that automatically converts Matlab into VHDL or Verilog. Sorry I don't remember more of the details though. KJArticle: 111123
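If the automatic route does not work out and the filter has to be rewritten by hand, a small direct-form FIR in VHDL is the usual starting point. The sketch below is generic: the widths, the four coefficients and the entity name are placeholders, not anything taken from the poster's MATLAB code.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity fir4 is
  port (
    clk   : in  std_logic;
    x_in  : in  signed(7 downto 0);    -- input sample
    y_out : out signed(17 downto 0)    -- full-precision result
  );
end entity;

architecture rtl of fir4 is
  type s8_array is array (0 to 3) of signed(7 downto 0);
  constant coefs : s8_array := (to_signed(12, 8), to_signed(34, 8),
                                to_signed(34, 8), to_signed(12, 8));
  signal taps : s8_array := (others => (others => '0'));
begin
  process (clk)
    variable acc : signed(17 downto 0);
  begin
    if rising_edge(clk) then
      -- shift the delay line and take in the new sample
      taps(1 to 3) <= taps(0 to 2);
      taps(0)      <= x_in;
      -- multiply-accumulate over the previous tap values
      acc := (others => '0');
      for i in coefs'range loop
        acc := acc + resize(taps(i) * coefs(i), acc'length);
      end loop;
      y_out <= acc;
    end if;
  end process;
end architecture;

A 2-D filter for imaging would wrap something like this with line buffers, but the multiply-accumulate structure is the part a synthesis tool maps directly onto FPGA multipliers.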
Hi, I am an undergrad student attempting to build a software defined radio device on a Stratix EP1S80 DSP Development Board, and am hoping to do most of the signal processing on a PC. I therefore need to transfer data to the PC at a rate of about 1 MBps (twice the AM IF frequency of 455 kHz at 8 bit signal resolution) - is this possible? What is the best way of establishing communication between the PC and FPGA at such high speeds? Any suggestions are much appreciated, thank you.Article: 111124
On 28 Oct 2006 21:52:14 -0700, entanglebit@gmail.com wrote in comp.programming: > Hello, all -- > > I'm beginning to do some work in the efficient mapping of intensive > algorithms (high orders of complexity) into hardware. I was hoping > that some of you with similar experience may be able to suggest some > resources -- textbooks, journals, websites -- of particular interest. > Your input is greatly appreciated. > > Sincerely, > Julian Kain Xilinx and Altera both have tools that do this, for their specific FPGAs of course. I haven't used either of them, and I don't know the name of the Xilinx tool, but the Altera C to hardware looks interesting. Check their web sites. -- Jack Klein Home: http://JK-Technology.Com FAQs for comp.lang.c http://c-faq.com/ comp.lang.c++ http://www.parashift.com/c++-faq-lite/ alt.comp.lang.learn.c-c++ http://www.contrib.andrew.cmu.edu/~ajo/docs/FAQ-acllc.html