Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
All well and good, unless you work in a company that refuses to actually purchase PCs. When they go EOL (end of lease) that's it - they disappear one night never to be seen again. (The IT dept is nice enough to copy everything in your user folder to the new hard disk, though.) We are still fighting to keep an old P3 system that went EOL years ago, and that we are presumably paying penalties on, because it is the last system with a working copy of several packages we need that are no longer supported. It's kind of sad to see so many developers sharing time on what has to be the oldest PC in the department. Of course, there are always those unsuspecting lab machines that haven't ever had design tools installed on them...Article: 102051
In article <1147155282.274065.16140@i40g2000cwc.googlegroups.com>, JJ <johnjakson@gmail.com> wrote: > >Phil Tomson wrote: >> In article <1146975146.177800.163180@g10g2000cwb.googlegroups.com>, >> JJ <johnjakson@gmail.com> wrote: >> > > >snipping > >> > >> >FPGAs and standard cpus are a bit like oil & water, don't mix very well, >> >very parallel or very sequential. >> >> Actually, that's what could make it the perfect marriage. >> >> General purpose CPUs for the things they're good at like data IO, >> displaying information, etc. FPGAs for applications where parallelism is >> key. >> > >On c.a another Transputer fellow suggested the term "impedance >mismatch" to describe the idea of mixing low speed extreme parallel >logic with high speed sequential cpus in regard to the Cray systems >that have a bunch of Virtex Pro parts with Opterons on the same board, >a rich man's version of DRC (but long before DRC). I suggest tweening >them: put lots of softcore Transputer-like nodes into an FPGA and >customize them locally, so you can put software & hardware much closer to >each other. One can even model the whole thing in a common language >designed to run as code or be synthesized as hardware with suitable >partitioning, starting perhaps with occam or Verilog+C. Write as >parallel and sequential code and later move parts around to hardware or >software as needs change. > >> I think the big problem right now is conceptual: we've been living in a >> serial, Von Neumann world for so long we don't know how to make effective >> use of parallelism in writing code - we have a hard time picturing it. > >I think the software guys have a huge problem with parallel, but not >the old schematic guys. I have more problems with serial, much of it >unnecessary but forced on us by lack of language features that force >me to order statements that the OoO cpu will then try to unorder. Why >not let the language state "no order" or just plain "par" with no >communication between. 
> >> Read some software engineering blogs: >> with the advent of things like multi-core processors, the Cell, etc. (and >> most of them are blissfully unaware of the existence of FPGAs) they're >> starting to wonder about how they are going to be able to model their >> problems to take advantage of that kind of parallelism. They're looking > >The problem with the Cell and other multicore cpus is that the cpu is >all messed up to start with; AFAIK the Transputer is the only credible >architecture that considers how to describe parallel processes and run >them based on formal techniques. These serial multi cpus have the >Memory Wall problem as well as no real support for concurrency except >at a very crude level; it needs context switches closer to 100 instruction cycles >to work well, not 1M. The Memory Wall only makes >threading much worse than it already was and adds more pressure to the >cache design as more thread contexts try to share it. I wasn't singing the virtues of any particular parallel architecture (like the Cell) - I brought it up to say that these architectures are now becoming known in the software engineering world and there are a lot of folks in that camp wondering how we're going to effectively develop software for them. > >> for new abstractions (remember, software engineering [and even hardware >> engineering these days] is all about creating and managing abstractions). >> They're looking for and creating new languages (Erlang is often mentioned >> in these sorts of conversations). Funny thing is that it's the hardware >> engineers who hold part of the key: HDLs are very good at modelling >> parallelism and dataflow. Of course HDLs as they are now would be pretty >> crappy for building software, but it's pretty easy to see that some of the >> ideas inherent in HDLs could be usefully borrowed by software engineers. 
>> >> > >Yeh, try taking your parallel expertise to the parallel >software world; they seem to scorn the idea that hardware guys might >actually know more than they do about concurrency while they happily >reinvent parallel languages that have some features we have had for >decades while still clinging to semaphores and spinlocks. You have to masquerade as a software guy ;-) > I came across >one such parallel language from U T Austin that even had always, >initial and assign constructs but no mention of Verilog or hardware >HDLs. > >But there are more serious researchers in Europe who are quite >comfortable with concurrency as parallel processes like hardware, from >the Transputer days based on CSP, see wotug.org. The Transputer's >native language occam, based on CSP, later got used to do FPGA design >and was then modified into HandelC, so clearly some people are happy to be in >the middle. > >I have proposed taking a C++ subset and adding live signal ports to a >class definition as well as always, assign etc; it starts to look a lot >like a Verilog subset but with C syntax, and builds processes as >communicating objects (or module instances) which are nestable of >course just like hardware. The runtime for it would look just like a >simulator with an event driven time wheel or scheduler. Of course in a >modern Transputer the event wheel or process scheduler is in the >hardware so it runs such a language quite naturally, well that's the >plan. Looking like Verilog means RTL-type code could be "cleaned" and >synthesized with off-the-shelf tools rather than having to build that >as well, and the language could be open. SystemVerilog is going in the >opposite direction. > snip > >The real money I think is in the problem space where the data rates are >enormous with modest processing between data points, such as >bioinformatics. If you have lots of operations on little data, you can >do better with distributed computing and clusters. 
Yes, bioinformatics can be a good application space for FPGAs (depending on what you're doing) snip > >Transputers & FPGAs two sides of the same process coin > Are there any transputers still being made? PhilArticle: 102052
sandeepbabel@gmail.com wrote: > I am working on a chip design, where the frontend is interfaced to > PCI bus and the backend has asynchronous FIFOs and a UART. I load the > data into the FIFO at 33 MHz and then read it at 40 MHz and pass it > to the UART module to transmit the data out. The design works great > in simulation, but it is giving me entirely different results when I > check it through a logic analyzer on the actual chip. I don't know > what I am doing wrong. Please advise. I'm not sure how anyone can help you with such a terse description of what you're trying to do and, worse still, what results you are getting?!? For example, it would be helpful to explain - at the very least - what you're actually measuring, what the measurements are, and how they differ from what you expect. What have you tried - common-sense troubleshooting should provide *plenty* of options before you need to post to this newsgroup? First step is to narrow down where the problem lies. Can you implement a register and successfully read/write over PCI? Can you provide a fixed pattern to the data-out register on the UART and see that? Can you seed the FIFO with fixed data and see that? And so on... As for simulation vs synthesis, most commonly it's due to the simulation's stimulus (inputs) being different to (or only a small, non-representative sample of) the real-world signals. eg. in your case PCI bus transactions, or clocking, such as the phase relationship between your clock domains. Another possibility is that you're relying on non-synthesisable elements of your HDL for 'correct' operation under simulation. Regards, -- Mark McDougall, Engineer Virtual Logic Pty Ltd, <http://www.vl.com.au> 21-25 King St, Rockdale, 2216 Ph: +612-9599-3255 Fax: +612-9599-3266Article: 102053
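One classic source of the simulation-vs-hardware mismatch described above is the clock-domain crossing inside the asynchronous FIFO itself. A minimal sketch of the standard technique, gray-coded pointers passed through a two-stage synchronizer, with illustrative module and signal names (this is not the poster's design):

```verilog
// Sketch: gray-coded read-pointer synchronization into the write-clock
// domain (the 33 MHz PCI side). Module and signal names are illustrative.
module rdptr_sync #(parameter AW = 4) (
    input                 wr_clk,          // 33 MHz write-side clock
    input                 rst_n,
    input      [AW:0]     rd_ptr_gray,     // gray pointer from the 40 MHz domain
    output reg [AW:0]     rd_ptr_gray_sync // safe to compare on wr_clk
);
    reg [AW:0] sync1;
    // Two-stage synchronizer: because only one bit of a gray code changes
    // per increment, a metastable sample is off by at most one position,
    // so the FIFO full/empty comparison is never wildly wrong.
    always @(posedge wr_clk or negedge rst_n) begin
        if (!rst_n) begin
            sync1            <= {AW+1{1'b0}};
            rd_ptr_gray_sync <= {AW+1{1'b0}};
        end else begin
            sync1            <= rd_ptr_gray;
            rd_ptr_gray_sync <= sync1;
        end
    end
endmodule
```

The binary pointers are converted with `gray = bin ^ (bin >> 1)` before crossing, and a mirror-image module carries the write pointer into the read domain. Passing raw binary pointers between the 33 MHz and 40 MHz domains is exactly the kind of bug that simulates cleanly but fails on the bench.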
Hi, What is a fair criterion for sampling the interrupt signal? For high-priority critical interrupts and low-priority interrupts, when should I decide to configure them to be level-triggered or edge-triggered? Thanks AshishArticle: 102054
Luke wrote: > I've got a little hobby of mine in developing processors for FPGAs. > I've designed several pipelined processors and a multicycle processor. > I wonder how you manage to find enough time to do all of this! I've only found time during my undergraduate years to build two processors, both with an excuse: A multicycle (>= 12!) processor for a 2nd-year digital logic project (16-bit, 13MHz on Altera EPF10K70) A pipelined processor (no I/O though, so useless) built for clock speed as a summer research project at my university. (32-bit, 250MHz on Altera Stratix EP1S40) I haven't ever found enough time otherwise!Article: 102055
It depends on the hardware that generates the interrupts, and nothing else. If the interrupt is held until the service routine removes it, use level. If the interrupt "just happens", and the device removes it automatically, use edge. Priority is a different issue.Article: 102056
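The two capture styles above can be sketched in a few lines of Verilog; module and signal names are illustrative, not from any particular design:

```verilog
// Sketch: level- vs edge-triggered interrupt capture (names illustrative).
module irq_capture (
    input      clk,
    input      rst_n,
    input      irq_in,    // interrupt source, already synchronized to clk
    input      clr,       // acknowledge from the service routine
    output     level_irq, // level style: follows the source while asserted
    output reg edge_irq   // edge style: latched on a rising edge, held until cleared
);
    reg irq_d;
    // Level: the device holds the line until the service routine removes it.
    assign level_irq = irq_in;
    // Edge: the event "just happens", so we must remember it ourselves.
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            irq_d    <= 1'b0;
            edge_irq <= 1'b0;
        end else begin
            irq_d <= irq_in;
            if (irq_in && !irq_d)   // rising edge detected
                edge_irq <= 1'b1;
            else if (clr)
                edge_irq <= 1'b0;
        end
    end
endmodule
```

This also shows why the choice follows the hardware: the level path is a wire, while the edge path needs state that only makes sense if the source pulses.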
Thanks. This question came from another perspective: there is a chance that noise might be picked up on the interrupt line connecting to the FPGA, which will be hosting the interrupt controller. Under this condition, what would be a suitable method of sampling it? We are thinking of clocked sampling.Article: 102057
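For a noisy interrupt line, clocked sampling usually means a two-flop synchronizer followed by a short digital filter. A minimal sketch, where the module name and the three-sample filter depth are illustrative assumptions:

```verilog
// Sketch: synchronize a possibly noisy asynchronous interrupt pin, then
// require three consecutive identical samples before changing state, so
// a single-cycle noise spike never reaches the interrupt controller.
module irq_filter (
    input      clk,
    input      rst_n,
    input      irq_pin,   // asynchronous, possibly noisy external pin
    output reg irq_clean  // filtered interrupt, safe to use in clk domain
);
    reg       s1, s2;     // metastability synchronizer
    reg [2:0] hist;       // last three synchronized samples
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            {s1, s2} <= 2'b00;
            hist     <= 3'b000;
            irq_clean <= 1'b0;
        end else begin
            s1   <= irq_pin;
            s2   <= s1;
            hist <= {hist[1:0], s2};
            if (hist == 3'b111)
                irq_clean <= 1'b1;
            else if (hist == 3'b000)
                irq_clean <= 1'b0;
            // otherwise hold the previous value (hysteresis)
        end
    end
endmodule
```

The filter depth trades latency against noise immunity: three samples at a 100 MHz sampling clock rejects glitches shorter than about 30 ns, and the depth can be widened for harsher environments.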
In article <1147180734.384416.112730@v46g2000cwv.googlegroups.com>, radarman <jshamlet@gmail.com> wrote: > Perhaps - but three SMT capacitors, a bit of wire, and perhaps a bit of > glue is a lot cheaper than rolling your own PWB. This would be a very > simple board mod that almost any beginner could successfully manage, > and it would be very effective. Cripes - I'm no expert at soldering, > but even I've managed to modify boards in this way. > > Also, consider that this allows you to keep all your GPIO (which is > rather limited on the S3E starter kit). If you are successful, you can > still use the hirose connector for something else (like a video > capture board) If you are willing to use up the Hirose port, you could probably kludge up some R/2R DACs with the resistors hanging out into free space off the pins of a through-hole connector, with no component more than one component away from a connector pin. Not exactly robust, or sound engineering practice, but usable enough. And not much worse than what I was thinking of, which was a clump of capacitors plugged into the board's VGA socket and leading to a VGA socket to plug your monitor into. But it does use up most of your expansion capability. Does anyone make a prototyping board with the Hirose connector? Digilent has some for the pair of 2x20 pin headers on their boards, but not for this new board. But the SMT capacitors solidly attached to the board probably would be a better way to go. Another idea, and I don't know if this is possible. When you set up the constraint file, you indicate what strength each pin gets driven at, and what logic family it simulates. So could you have e.g. a 1.5 V logic at 2 mA for dim and 3.3 V driving 15 mA for bright? Or are the bits that control that not accessible to the logic signals? Or is it otherwise a Bad Idea? -- David M. Palmer dmpalmer@email.com (formerly @clark.net, @ematic.com)Article: 102058
In article <1147203980.950132.150410@e56g2000cwe.googlegroups.com>, Jeff Brower <jbrower@signalogic.com> wrote: > Radarman- > > > We fought for a week with every part of the > > toolchain until we switched to another workstation to create the > > bitstream files - no errors, just corrupt binaries. > > I don't understand the struggle. You are violating basic usage > principles of ISE. Never a) uninstall and re-install, or b) install a > new version on the same machine as an earlier working version. The > only thing permitted on any given machine is service pack upgrades > within a version. Yes that leaves you with the "5.1 machine", the "6.1 > machine", etc. but that's the rule. Your FAE should have told you > that; it's all we've ever heard since 1999 and several different FAEs. Is that a licensing thing or a functionality thing? When I got my Spartan 3e starter kit a few days ago, I installed the included ISE 8.1 under E:\Program Files\Xilinx . When I found out that some things Do Not Work in paths that have spaces in them, I uninstalled it and re-installed it under E:\ProgramFiles\Xilinx . Have I set myself up for a lifetime of pain until I get a new computer? -- David M. Palmer dmpalmer@email.com (formerly @clark.net, @ematic.com)Article: 102059
Hello all, I am doing a design with block RAM. The register block is the part of the full design which is causing concern. The block diagram of the register block is at the link http://esnips.com/doc/58fb6919-cf9a-4d62-bfe5-6fef95605a5a/block_diagram.jpg It consumes almost 8K of LUTs, 3K of FFs and 128 BRAMs in a V4LX60, and the total count is 33K of LUTs. The register block operates on two clocks as shown in the figure: clk1x and clk4x = 4*clk1x. The paths I want to be constrained are indicated with a * sign in the block diagram. Initially I was working with two independent clocks, that is, clk1x and clk4x came externally, and I specified no relation between these clocks. I was able to constrain the delay of the * blocks down to 6ns, giving only from-to constraints. After the initial experiments I included a DCM block to generate clk1x and clk4x, which is also indicated in the diagram. But now when I give a period constraint to clk4x, the design does not route. Why is that? I gave 10ns to clk4x (initially it worked with 6ns) but it still does not route. After applying area group constraints to the clk4x and clk1x domains the routing problem is somewhat reduced, but it is still there. Now it shows an initial timing score of 3000000. I want to understand what changed in the design after including a DCM to generate the two clocks. Is there any special consideration I should apply to the design? One more thing I want to know is the routing delay between BRAMs and LUTs. As BRAMs are spread across the entire chip, is there any special routing resource to handle this? I am planning to include a buffer between the BRAM and the combinational logic, and then clock the BRAM on the -ve edge and latch the values into the buffer at the +ve edge. Will that improve the timing? Please give your expert comments on these issues. Thanks and regards Sumesh V SArticle: 102060
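The buffering idea at the end of the post can be sketched as follows. Registering the BRAM output on the same clock edge, rather than the -ve edge (which leaves only half a clk4x period for the BRAM access plus routing), is the more common way to break the BRAM-to-LUT path; names here are illustrative:

```verilog
// Sketch (names illustrative): an extra register on the BRAM output adds
// one clk4x cycle of latency but removes the long BRAM-to-LUT routing
// delay from the critical path, giving the combinational logic a full
// period to work with.
module bram_outreg #(parameter DW = 32) (
    input                 clk4x,
    input      [DW-1:0]   bram_q,    // raw block-RAM read data
    output reg [DW-1:0]   bram_q_reg // feed downstream logic from here
);
    always @(posedge clk4x)
        bram_q_reg <= bram_q;
endmodule
```

Virtex-4 block RAMs also offer an optional built-in output register that achieves the same pipelining without spending fabric flip-flops, which is worth trying before adding external registers.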
That might be the ultimate solution.. =D /Johan Symon wrote: > Of course, if you've got those run-on-flat tyres maybe you don't care about > the pressures? > > http://www.funny-videos.co.uk/downloads/runonflat.wmv > > :-) > Cheers, Syms. > > -- ----------------------------------------------- Johan Bernspång, xjohbex@xfoix.se Research engineer Swedish Defence Research Agency - FOI Division of Command & Control Systems Department of Electronic Warfare Systems www.foi.se Please remove the x's in the email address if replying to me personally. -----------------------------------------------Article: 102061
Sanka Piyaratna schrieb: > Hi, > > I am wondering if there is anyone who has worked out a way to use ISE > 8.1 projects with Makefiles to compile FPGA images. I am actually > wondering if it would be possible to automatically generate a Makefile > from the project file. > > Thank You, > > Sanka. Hi Sanka, not from the project file anymore, as said in a previous posting, but from other files: the command.log file, the *.prj file(s) containing a list of the used sources, and the *.xst file(s) containing the XST option settings. You can learn more about these files by reading the XFLOW documentation. The XFLOW tool targets script-based use of the ISE tools, like the one John posted. Scripts are a good thing, but make-ing still has some advantages. If you succeed in any way with your idea, let us know! have a nice synthesis EilertArticle: 102062
Also, will there be any improvement in not specifying the global constraint, i.e. setting a period for the 4x clock, and instead specifying only the needed constraints, that is, the from-to constraints for the blocks I have indicated in the block diagram?Article: 102063
Hi Keith, > I have a fairly large Altera-based design that will soon be updated to > Cyclone II and Quartus (from Flex10K and Max+II). > > Has anyone else been through this migration that would be willing to share > any gotchas? Is the migration tool in Quartus worthwhile? The Quartus functionality for importing MaxPlusII designs works pretty well. The only things you have to watch out for are: - MaxPlusII block diagrams can be read without problems, but cannot be written back again in their native format. You'll have to use Quartus's .bdf format for that - Some assignments like CLIQUE and so on are not available in Quartus - you'll have to use LogicLock regions for that. Apart from that, once you've set the device to a CycloneII, redone the pinout (the Pin Planner feature is very helpful in this), and double-checked the timing, you should be all set. And, marvel at the performance increase once you've compiled the design. Best regards, BenArticle: 102064
When I try to debug a program in Eclipse the console says: "No source file named main.c." (This message appears 5 times and disappears quickly again.) If I try to configure the debugger an Application Error appears: mb-objdump.exe says "The instruction at <Address> referenced memory at <Another address>. The memory could not be 'read'". In the software debugger I only get the Assembly view, and if I try to do anything (like single step) it seems that the debugger is "frozen". I am unable to do anything. Someone please help me. Regards, frustrated Raymond :(Article: 102065
easier ride? How much easier? Just yesterday I wrote a test application that allocates a system DMA buffer and sends its physical address to the PCI target, which then starts a master transaction. The PCI backend logic needed about 20 lines of Verilog; for the WinXP test application I wrote about 15 lines of Delphi code. You say on Linux it would be easier? Well, if you have a Linux box in your bedroom then maybe :) Antti PS actually Linux device drivers are fun, I agree, but quick-and-dirty direct hardware programming on WinXP is simple as well.Article: 102066
> Would a VM like VMWare or VPC do the trick, keep your OS with ISE > version installed in different virtual boxes. As long as only one runs > all the machine resources are available to that VM guest, whether its > Linux or Windows. Do you say that this would still run at full speed? Has anyone tried this? I suspect the emulation takes quite some resources ... bye, MichaelArticle: 102067
> Scripts are a good thing, but make-ing still has some advantages. Is there a way to handle some incremental flow with makefiles ..? I know the process from C programming ... only modified files need to be compiled - but all of them need to be linked ... Could that save some time for FPGAs as well? The synthesize step only needs to handle modified files ... as I understand the flow, the next step is merging everything (and optimizing across modules)? bye, MichaelArticle: 102068
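A dependency-driven flow along those lines can be sketched with the ISE command-line tools. This is a minimal sketch with illustrative file names; note that XST resynthesizes everything listed in its project file, so true per-module incremental synthesis would need a separate .prj/.xst pair per module, with ngdbuild merging the resulting netlists:

```makefile
# Sketch of a dependency-driven ISE flow (file names illustrative).
# Synthesis re-runs only when a source file changes; each back-end step
# re-runs only when the file it consumes changes.
SOURCES = top.v uart.v fifo.v

top.ngc: $(SOURCES) top.xst top.prj
	xst -ifn top.xst

top.ngd: top.ngc top.ucf
	ngdbuild -uc top.ucf top.ngc

top_map.ncd: top.ngd
	map -o top_map.ncd top.ngd

top.ncd: top_map.ncd
	par -w top_map.ncd top.ncd

top.bit: top.ncd
	bitgen -w top.ncd
```

Even without incremental synthesis this saves real time: editing only the .ucf skips the xst step entirely, and a timing-only tweak re-runs just par and bitgen.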
JJ wrote: > I think the software guys have a huge problem with parallel, but not > the old schematic guys. I have more problems with serial, much of it > unnecessary but forced on us by lack of language features that forces > me to order statements that the OoO cpu will then try to unorder. Why > not let the language state "no order" or just plain "par" with no > communication between. Parallel programming as both an undergraduate and graduate class has been available for a while on most computer science and computer engineering campuses, and is a core consideration in many undergraduate and graduate computer architecture classes as well. While some older software guys that haven't kept current in their trade may be unaware, I suspect there are as many older hardware guys that have difficulty with VHDL, Verilog, SystemC, and current modeling and simulation techniques too. Parallel hardware comes in many different flavors, and there are many different solutions for specific sets of those implementations, frequently specific to industries, application areas, and home schools of research. The complaints here are less than conclusive and complete, as they pick and choose some of the worst .... anyone can trash many hardware designs in a similar fashion as lacking complete knowledge of how to implement a system that software can actually run on well without a bunch of hacks. Good designs happen when each of hardware, systems software, applications software, and good architects get together and fit the system design to the problem, rather than leaving hardware guys to go off and have some confused idea that software guys have to patch up later.Article: 102069
Hello everyone, I'm a computer engineering student and I'd like some information about the JHDLBITS project. Can anyone tell me where to find the source files, or at least the files for the ADB module? Thanks for your help. Hello, I'm interested in the FPGA world but I can't find any information about the JHDLBits project. How can I find JHDLBits's source files? In particular, how can I find the ADB database? Thank you!! Help me!!! :)Article: 102070
Is there any good EDIF simulator around? I need to use it in my project. It would be good if it were free/cheap.Article: 102071
Quartus II 6.0 is now available, I downloaded the Web Edition yesterday. I'm on the Altera mailing list but I don't remember seeing any notification of the new version. It worked OK when I tried it on a couple of small projects. LeonArticle: 102072
I usually just go straight to HDL. I like to prototype different pieces of hardware in Verilog while I'm still designing the overall architecture so that I have a better idea of what exactly I'm trading off in terms of clock speed and utilization when I choose one design over another. It really does end up being an iterative process. The final design will be done in VHDL. I'm shooting for 50MHz, but I have a feeling the bypass logic simply won't be able to go that fast. The bypass logic ends up being the critical path for all of the processors of this type that I've looked at, so I'll be paying special attention to it. Once I get the first stage implemented (basically just a dataflow processor), I'll be able to experiment with adding an extra pipeline stage in the bypass logic (kinda defeats the purpose, oh well). That'll probably lower the overall IPC, but it may or may not increase the clock frequency enough to be an advantage. So hopefully I'll have the critical path laid out at an early enough stage that I will be able to do something about it without too much grief. Then again, there will still be plenty of opportunities for other critical paths to pop up that I hadn't thought of. I think 25MHz is probably a reasonable estimate.Article: 102073
No, it's not generally a licensing problem. It's more a problem with lousy programming, and the different versions tripping over each other. The software depends on environment variables instead of using the registry for a lot of things, and depending on which is first, you can end up calling the old version with the new version, or in my case, vice-versa. The sad thing is, Xilinx gets the rocket science (mapping and PAR) right - but screws up on the little stuff. None of these problems are endemic to Windows applications. For some reason, though, when engineering companies release software, they always ignore the spit and polish. Modeltech did it (and now that they are owned by Mentor Graphics, it's even worse). Synplicity does it - particularly with that HUGE "RUN" button and other non-standard dialogues. You would think it would be simpler to just write a decent Windows shell, a separate Unix shell, and then use portable C or C++ for the core application than to maintain all that homegrown cruft. Heck, ModelSim is actually *LESS* usable now at 6.1 than it was in 5.8! Usability aside, though - the tools don't use standard OS features that could help prevent these problems. Most applications use the registry in Windows, where it is fairly simple to set up a structure that allows for multiple versions. In Unix, it is fairly simple to keep all your configuration files in the same folder, and create Makefiles that call out which tool to use. It isn't hard. If *I* can do it, with just an average knowledge of C and almost zero C++, I suspect the developers at most of these engineering companies can do it as well.Article: 102074
Sean Durkin wrote: > Sanka Piyaratna wrote: > >>Hi, >> >>I am wondering if there is anyone who has worked out a way to use ISE >>8.1 projects with Makefiles to compile FPGA images. Here is a link to a makefile app note on the XESS website: http://www.xess.com/appnotes/makefile.html --- Joe Samson Pixel Velocity