Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Ray, Thanks for the excellent advice and help. A few comments/responses.

- The implant is actually inductively powered through the skin and there is no battery. For the Xilinx chips I was looking at, quiescent power consumption was under 30uA, which is perfectly fine. We're looking at having about 50-100mA @ 3.3V available to use for all components (amp, A2D, uC, CPLD, telemetry system).
- The async Verilog code will only be run once every few days, and will be controlled from a microcontroller that I can tweak to guarantee sufficient time between each state change. I do have a clock available that I could feed into it to make it a synchronous design - but I'll go for the async first to see if I can get it to work.
- I'll look into finding a copy of Morris Mano's book.

Thanks, Reza

p.s. I've added some extra code (see below) that synthesizes just fine on one of the Xilinx CPLDs, but gives errors if I configure it for other CPLD types. It says it can't find a matching template for the other CPLD architectures. I also found that I had to do the posedge and negedge explicitly. I thought that if I left that out, any state change for the signal would initiate the block.

  /* cs toggler - cs goes high for one clock cycle every 17 clk_in cycles */
  always @(posedge clk_in or negedge clk_in) begin
    sck = !sck;
    spi_counter <= spi_counter + 1;
    if (spi_counter == 16)
      cs <= 1;
    if (spi_counter == 17) begin
      cs <= 0;
      spi_counter <= 0;
    end
  end
Article: 93451
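A sensitivity list naming both posedge and negedge of the same signal, as in the code above, has no counterpart in ordinary flip-flop hardware, which is why some device libraries reject it. As a hedged sketch only (the counter width, initial values, and the module wrapper are assumptions, not from the post), a conventional single-edge version of the toggler might look like:

```verilog
// Hedged sketch: a synchronous, single-edge rewrite of the cs toggler.
// Counts 0..16 (17 clk_in cycles); cs is high for exactly one cycle.
// Note sck now toggles once per clock, i.e. at clk_in/2 - if it must
// toggle on *both* edges as in the original, a doubled clock or a DDR
// output primitive would be needed instead.
module cs_toggler (
    input  wire clk_in,
    output reg  sck = 1'b0,
    output reg  cs  = 1'b0
);
    reg [4:0] spi_counter = 5'd0;

    always @(posedge clk_in) begin
        sck <= ~sck;
        if (spi_counter == 5'd16) begin
            cs          <= 1'b1;
            spi_counter <= 5'd0;
        end else begin
            cs          <= 1'b0;
            spi_counter <= spi_counter + 5'd1;
        end
    end
endmodule
```

Using nonblocking assignments throughout (including for sck) also avoids mixing blocking and nonblocking updates in one clocked block.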
If you can store a buffer of at least one extra scanline, you could try the Paeth predictor + RLE. This will give reasonable prediction of the next pixel's grayscale value, and if the prediction is OK, the result will often contain a string of zeroes, and the RLE will do a good job. If you can "afford it" (in other words, the FPGA is fast enough), you could use arithmetic coding on the resulting predicted values, with a simple order-0 model, instead of RLE. Paeth + RLE will do OK on computer generated images, but not on natural images. Paeth + AC will do OK on both. Both will fit in 1kb of code for sure.

Nils
Article: 93452
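The Paeth predictor Nils mentions comes from PNG filtering: it estimates a pixel from its left (a), above (b), and upper-left (c) neighbours. As a hedged sketch (8-bit grayscale and the function wrapper are my assumptions), it is small enough to fit in a single combinational Verilog function:

```verilog
// Hedged sketch of the PNG-style Paeth predictor as combinational logic.
// a = left, b = above, c = upper-left neighbour; 8-bit grayscale assumed.
function [7:0] paeth;
    input [7:0] a, b, c;
    reg signed [9:0] sa, sb, sc, p, pa, pb, pc;
    begin
        sa = a; sb = b; sc = c;             // widen to signed working values
        p  = sa + sb - sc;                  // initial linear estimate
        pa = (p >= sa) ? p - sa : sa - p;   // |p - a|
        pb = (p >= sb) ? p - sb : sb - p;   // |p - b|
        pc = (p >= sc) ? p - sc : sc - p;   // |p - c|
        if (pa <= pb && pa <= pc)           // return the nearest neighbour,
            paeth = a;                      // breaking ties in order a, b, c
        else if (pb <= pc)
            paeth = b;
        else
            paeth = c;
    end
endfunction
```

The encoder would then RLE-code the differences between the actual pixels and these predictions, which cluster around zero on smooth images.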
Hi Gabor, Thank you for your response. A good paper. I printed it out.

Weng
Article: 93453
I tried to use the translate mode of CORDIC. Unfortunately I always get a wrong phase_out result in the simulation. Is it a known bug?
Article: 93454
On 20/12/2005 the venerable Melanie Nasic etched in runes:
> Hello community,
>
> I am thinking about implementing a real-time compression scheme on an
> FPGA working at about 500 MHz. Facing the fact that there is no
> "universal compression" algorithm that can compress data regardless
> of its structure and statistics, I assume compressing grayscale image
> data. The image data is delivered line-wise, meaning that one
> horizontal line is processed, then the next one, a.s.o. Because of
> the high data rate I cannot spend much time on DFT or DCT and on data
> modelling. What I am looking for is a way to compress the pixel data
> in the spatial, not spectral, domain because of latency aspects,
> processing complexity, etc. Because of the sequential data transmission
> line by line, block matching is also not possible in my opinion. The
> compression ratio is not so important; a factor of 2:1 would be
> sufficient. What really matters is the real-time capability. The
> algorithm should be pipelineable and fast. The memory requirements
> should not exceed 1 kb. What "standard" compression schemes would
> you recommend? Are there potentialities for a non-standard "own
> solution"?
>
> Thank you for your comments.
>
> Regards, Melanie

Have a look at Graphics File Formats by Kay & Levine (ISBN 0-07-034025-0). It will give you some ideas.

--
John B
Article: 93455
I had the same message. It never caused any problems and I continued for weeks that way. I corrected it by creating a new project and moving all my code into it. The message went away.

Steve

On 19 Dec 2005 23:58:32 -0800, "Raymond" <raybakk@yahoo.no> wrote:
>I have an ISE project where I have a Microblaze submodule, containing 2
>CPUs. When I try to make timing constraints, a warning pops up:
>
><<You have more than one instance of the EDK module MyBlaze. This will
>not implement correctly.>>
>
>This is not true. I have only one instance of my dual processor core.
>
>Is it safe to ignore this warning?
>
>What can I do to prevent this warning?
>
>Raymond
Article: 93456
In comp.arch.fpga Melanie Nasic <quinn_the_esquimo@freenet.de> wrote:
: I want the compression to be lossless and not based on perceptional
: irrelevancy reductions.

If it has to be lossless there's no way you can guarantee to get 2:1 compression (or indeed any compression at all!). You may do, with certain kinds of input, but it's all down to the statistics of the data. The smaller your storage, the less you can benefit from statistical variation across the image, and 1 Kbyte is very small!

Given that a lossless system is inevitably 'variable bit rate' (VBR), the concept of "real time capability" is somewhat vague; the latency is bound to be variable. In real-world applications the output bit rate is often constrained, so a guaranteed minimum degree of compression must be achieved; such systems cannot be (always) lossless.

From my experience I would say you will need at least a 4-line buffer to get near to 2:1 compression on a wide range of input material. For a constant-bit-rate (CBR) system based on a 4x4 integer transform see: http://www.bbc.co.uk/rd/pubs/whp/whp119.shtml This is designed for ease of hardware implementation rather than ultimate performance, and is necessarily lossy.

Richard.
http://www.rtrussell.co.uk/
To reply by email change 'news' to my forename.
Article: 93457
johnp wrote:
> Reza -
>
> I suggest you do some studying on your own rather than asking this
> group to design your circuit.
>
> On your tri-state question, try Google "verilog tristate", then do some
> reading.
>
> John Providenza

I have to agree. No offense, but if you are in an EE graduate level course, it is shameful that you are asking what a "Digital Design" course would cover. And everyone has been more than helpful here. There are SO many resources available on the Internet with respect to Verilog and Xilinx CPLDs/FPGAs. I just started working with the group in our company that does FPGA systems design and there IS a learning curve. I have had to do a lot of reading, a lot of RTL coding and simulation, and a lot of experimentation. I have asked a couple of questions on this forum, but have not relied on it for an absolute answer. Don't get me wrong. It is a great resource, but you are doing yourself a disservice if you won't try to figure this out on your own. Trust me. An employer will see this immediately and you won't make it very far. Just try.
Article: 93458
"If A do a, else do b. If !A do b, else do a."

In regards to the above: these statements make "logical" sense, but it is BAD design practice when writing RTL code. It JUST is. I have never heard or read any differently. Put your "reset" condition first, then follow that with your other conditions. I don't pretend to be an expert, so if anyone has a different opinion, please let me know. I don't know why the synthesizer has a problem with it, but if it is not immediately intuitive at first glance (which it wasn't), then it very likely will be more difficult to synthesize. I personally think that putting the reset condition first, like "if (reset condition)...", makes complete sense. I guess I have looked at too much RTL code here, read too many of our format docs, etc. to see it any differently. But when you have to write code that will ultimately be used in an ASIC (not me... others in the company) costing a freaking ass-ton of money to develop, you tend to keep things consistent.

I would not spend too much time worrying about why your particular construct wouldn't synthesize. Instead, focus on the de facto standards that are used. Any Verilog book I have seen, and any Verilog website with examples that I have seen, will help. Writing RTL code is not a field where you want to be creatively different with regards to the format.

The best advice I can give is: Do NOT write ANY code until you know EXACTLY how the circuit will work. This includes FFs and the combinational logic attached to them. I sometimes draw this out on paper. Then you write RTL code using your drawing. Then it is just the process of translating that drawing to code, which should be relatively trivial.
Article: 93459
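The "reset condition first" convention described above usually takes the shape of the sketch below (clk, rst_n, d, and q are hypothetical names, and whether the reset is asynchronous and active-low is a per-project choice):

```verilog
// Hedged sketch of the reset-first idiom: the reset clause leads,
// everything else describes normal clocked behaviour.
always @(posedge clk or negedge rst_n) begin
    if (!rst_n)
        q <= 1'b0;     // reset state first - maps onto the FF's async clear
    else
        q <= d;        // normal operation
end
```

Synthesizers recognize this shape directly as a flip-flop with an asynchronous clear, which is exactly the predictability the post is arguing for.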
Ray,

Some comments,

Austin

-snip-

> What is missing is geographic relationships between parts of the
> circuit. Perhaps the biggest piece missing in the current tools is
> utilization of the hierarchy in a design.

As I said, there is a lot of room for improvement here. You are assuming that the hierarchy is well done, and that the results from working on each piece alone will be better. I just don't know if that is true. Good area for work, I would agree.

> Now, I do disagree with your assertion that each generation of the tools
> improves both run time and quality of results.

I have to differ here. I understand your issues, but if we deal with the ever expanding "standard suite" of test designs with better performance, and better run time, I have to assert that it is better. Is everything better? Of course not.

> One of the biggest steps backward came from eliminating delay based
> clean-up (IIRC that happened in 5.2).

I happen to agree with you here; my personal opinion is that the tools should allow you to choose to go to the extra effort to find the best paths, and not stop as soon as the aggregate requirements are met (or stop and give up if they can't meet the requirements). I think you will appreciate that what was done did provide for a much faster time to get the design. We do make the parts bigger every generation, and you may have noticed, processor power is not keeping up anymore.
Article: 93460
Jim,

Some comments,

Austin

-snip-

> Austin, perhaps if you used engineering measurements for SW results,
> rather than words like "wizards" and "magic", then the SW might have
> a chance to really improve with each release?

The software group has a very rigorous quality-of-results metrics (measurement) system for evaluating their work. I get to use the superlatives; they do not.

> I did wonder how Altera suddenly found power savings in SOFTWARE -

We still beat them on power; ask your FAE for the presentation. They took a really lousy situation and made it just plain lousy. We still have a 1 to 5 watt advantage, AFTER they run their power cleanup.

> Given the enormous investment the companies claim, these field results
> seem rather abysmal - seems the HW is carrying the SW?

Rather, the software is now (often) carrying the hardware. It is very hard to get the latest technology to be any faster than the previous one without architecture and software. If the software buys a speed grade, that is all the customer cares about. The silicon gets less expensive with the shrink to the next generation. Who cares if the performance came from software, hardware, or both?

> Still, it does seem there is indeed a lot of 'fat' in Place & Route SW,
> so we can expect further 'double digit improvement' claims.... :)

I agree. Until the tools do a better job than a room full of experts, the tools are just not 'there'. It reminds me of compilers for high level languages many years ago: there was a time I could write assembly code that was faster, better, smaller than any compiled high level language (anyone recall PLM86?). Then after a while, the compilers got better and better. Until finally, I had to agree that all that work was not worth it: often the compiler yielded a better solution than my hand-written assembly code.
Article: 93461
You've had people recommend that you "think hardware" and I'm of that camp. It's one of the reasons for using "templates": each template corresponds to a specific "piece" of hardware (often called a library cell in ASIC design; there are similar concepts in FPGA design, but I don't know if the terminology is changed). In fact, the original synthesizers were close to template matchers. They simply looked at the code and matched it to a library of templates, and if they found a match, the synthesizer laid down the piece of hardware corresponding to that template. Now synthesizers have grown much smarter, but the basic concept still holds.

This is why it is important for you as a designer to follow the templates carefully. If you write Verilog that matches the templates, you can predict exactly what kind of hardware the synthesizer will create. If your Verilog is a little off, the synthesizer can bend a little to match the templates, but it may do so by either matching a template that you don't expect or by inserting extra logic, etc. For example, a "tri-state buffer" in the library I'm familiar with looks like:

  assign out[31:0] = enable ? value[31:0] : 32'bz;

If you put this fragment in your Verilog and use the library I mentioned, you will instantiate 32 parallel 1-bit buffers with a tri-state enable pin, which copy value to out when the enable is true and leave the value floating if enable is false. You can vary this template to get 32 distinct enable pins--personally, I would write that as 32 1-bit assignments, but I'm not a chip designer, just an architect, so there may be a shorter solution. However, I would not attempt to put this code into a "clocked always block". To me that is tempting the synthesizer to match a different template than I expected (and tri-state buffers are something I know have specific semantics that I want followed exactly, not the synthesizer "winging it"). That's part of the thinking hardware aspect.
I have a specific piece of hardware in mind when I write the Verilog code. I don't mix templates together willy-nilly, as the synthesizer may figure out what I want and may not. There are some places where I allow more latitude, because I know the synthesizer can generate some circuit that will work and the chip designer can modify the Verilog if the synthesizer doesn't generate a circuit which is "good enough"--state machines are a good example of where this latitude is usually OK.

This brings up the last example you have given:

> always @(posedge clk_in or negedge clk_in) begin ...

I don't know what kind of hardware you expect to react exactly to both rising and falling edges of your clock (and nothing else). Most synthesizers don't know of such logic either. Therefore, this line is a recipe for disaster. If you can explain what kind of gates you expect it to create, there may be another way to write the code. However, the synthesizers that are complaining that they can't find a matching template are doing so for good reason.

One final little point on this topic: the code in the templates makes the Verilog behavior match hardware behavior under a set of assumptions. These assumptions are normally met under the "clocked digital design regimen". That is, if we only look at the values when things are stable (in between clocks and when things have had enough time to settle), the behavior the Verilog code models and the behavior the hardware exhibits will match. However, there are things which happen in the hardware that don't happen in the Verilog model, and vice versa. Thus, you can in Verilog ask if a signal is floating, but there probably isn't a piece of hardware that can ask that question. Designers who forget that the hardware and the Verilog "aren't the same", and only behave the same under controlled conditions, often write Verilog code that corresponds only to imaginary hardware that can't be built. Gross violations won't get through the synthesizer.
More subtle violations will result in pre- and post-synthesis mismatches, or worse, a design that simulates but fails in real hardware under conditions that violate the assumptions of the simulations.

Hope this helps,
-Chris

*****************************************************************************
Chris Clark                    Internet   : compres@world.std.com
Compiler Resources, Inc.       Web Site   : http://world.std.com/~compres
23 Bailey Rd                   voice      : (508) 435-5016
Berlin, MA  01503  USA         fax        : (978) 838-0263  (24 hours)
------------------------------------------------------------------------------
Article: 93462
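Chris mentions varying the 32-bit tri-state template to get 32 distinct enable pins. One hedged way to write that without 32 separate assignments is a generate loop (Verilog-2001; out, enable, and value are hypothetical names):

```verilog
// Hedged sketch: per-bit tri-state enables, one 1-bit buffer per line,
// each iteration matching the 1-bit form of the template Chris describes.
genvar i;
generate
    for (i = 0; i < 32; i = i + 1) begin : tbuf
        assign out[i] = enable[i] ? value[i] : 1'bz;
    end
endgenerate
```

Each iteration is still the same recognizable 1-bit template, so the synthesizer should map every line to its own tri-state buffer with its own enable.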
mottoblatto@yahoo.com wrote:
> The best advice I can give is: Do NOT write ANY code until you know
> EXACTLY how the circuit will work. This includes FF's and the
> combinational logic attached to them. I sometimes draw this out on
> paper. Then you write RTL code using your drawing. Then it is just
> the process of translating that drawing to code, which should be
> relatively trivial.

Why not draw the circuit on a computer with a schematic capture package then? Why the overhead of going through Verilog and synthesis? Either that, or your advice is not that good really.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Losbergenlaan 16, B-3010 Leuven, Belgium
From Python to silicon: http://myhdl.jandecaluwe.com
Article: 93463
It is good advice if you are just starting out designing using an HDL. If you don't have a picture of the circuit you are describing, then you may write crappy code. I lied: I do not draw the circuit out anymore. But I have in the past, when I was first starting with simple logic... like what the guy is trying to do. It is SIMPLE. I can visualize digital hardware and code from there, but I didn't want to muddy the waters. Schematic capture would be fine for this example, but it defeats the purpose of trying to learn the HDL. I'm just trying to give helpful advice to someone who obviously does not know what is going on. "What would a Digital Design class cover?" Come on. This guy needs to draw a circuit!!

I am surprised at your response though. Being an "electronic design consultant", I would think that you would agree with what I said. Maybe it didn't come across well. I didn't mean "draw your seriously complex design, and then code it". Do you write code before you know what your circuit is going to do, or how it will work? If so, I wouldn't use you as a consultant, Jan.
Article: 93464
Dear Group,

I am designing an LCD controller, straight VGA using a 6.5" TFT (60Hz and a 25MHz dot clock) with a 32Mbit SDRAM frame buffer (using two Xilinx block RAMs as alternate line buffers). Xilinx ISE 7.1 development environment.

I also designed the hardware: a BlackFin BF532 CPU/ROM/SDRAM with a Xilinx Spartan-3 (XC3S400-4PQ208C) handling the general I/F requirements and VGA display. It is a 4-layer PCB with a 25MHz master clock. Until now, all my coding has been with VHDL and everything has simulated correctly.

Come on now... Just how difficult can it be? Or so I thought.

Well, everything works just fine for about 200ms (correct syncs etc...) and then things go dead for 800ms -- no syncs, nothing. And then things spring back to life for another 200ms, ad nauseam... The internal clocks are running fine (I have some spare signals on the FPGA to which I can bind some debug signals).

Since I am pretty much a newbie to VHDL, I figured I would design a replacement VGA module using gate-level logic (counters, comparators and so on), but it generates exactly the same results. The power supplies appear to be clean. The BlackFin continues to work with a command-line UBOOT serial link.

This project has taken 6 weeks longer than it needed to. I look an idiot (which I probably am) and only the thought of how much pleasure it would give my mother-in-law stops me from 'doing myself in'...

Any kind or informative suggestions would be very much appreciated.

Regards & seasonal greetings to all.

Peter
Article: 93465
There isn't enough info in your post to provide any detailed debugging opinion, but it looks like you must have misunderstood something in the way that either the Blackfin or the TFT interface works, as both your VHDL code and your gate-level code fail in the same way. You have a repeatable and consistent problem, which is much better than a random failure, so you should be able to track down the culprit.

If you included a JTAG port on your PCB that is connected to the 3S400, then I would suggest that you try inserting a ChipScope ILA debug core into your design. This will give you the ability to probe and trigger on the internal buses to see where the failure is. If you don't already have a license for this, you can get a 60-day full feature eval at http://www.xilinx.com/chipscope - that should be plenty of time to find and correct your problem.

Ed

peter.halford@alarmip.com wrote:
> Dear Group,
>
> I am designing an LCD controller, straight VGA using a 6.5" TFT (60Hz
> and a 25MHz dot clock) with a 32MBit SDRAM frame buffer (using two
> Xilinx block RAMs as alternate line buffers).
-snip-
Article: 93466
Reza Naima wrote:
> Rob,
>
> I never asked if a microprocessor could read a 'z', I asked if a CPLD
> could determine if an input was floating or not - though I doubt it
> could. And as I've stated, I'm a graduate student, so I'm obviously
> not working in industry. I take it you like to skim.
>
> I found the RTL schematic viewer - though I still am not sure what RTL
> stands for.
>
> Reza

RTL = Register Transfer Level. It means the design is specified in the HDL by describing the registers and the logic that goes between them. RTL is generally considered to be device independent, while giving enough detail to make synthesis fairly easy. Contrast this with a structural description, which is basically a netlist containing the FPGA primitives and the wiring connections for them. Structural implementations are device specific, and leave nothing for the tools to infer during synthesis. Also contrast with behavioral, which describes the black-box function of the design but not the details of the implementation. Behavioral descriptions are generally not synthesizable.
Article: 93467
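The levels described above can be illustrated with a trivial counter bit (the names, and the choice of the Xilinx FDCE primitive for the structural line, are my assumptions, not from the post):

```verilog
// Hedged illustration of RTL vs. structural style.

// RTL: describe the register and the logic feeding it; the tools
// infer a flip-flop plus an incrementer on any target device.
always @(posedge clk)
    count <= count + 1;

// Structural: instantiate and wire a specific device primitive
// (FDCE = Xilinx D flip-flop with clock enable and async clear);
// nothing is left for synthesis to infer.
FDCE ff0 (.Q(q0), .C(clk), .CE(1'b1), .CLR(rst), .D(d0));
```

A behavioral description, by contrast, might just state the counting behaviour with simulation-only constructs such as delays, which is useful in a testbench but generally not synthesizable.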
> Come on now... Just how difficult can it be? Or so I thought.
>
> Well, everything works just fine for about 200ms (correct syncs etc...)
> and then things go dead for 800ms -- no syncs, nothing. And then things
> spring back to life for another 200ms ad nauseam...
>
> The internal clocks are running fine (I have some spare signals on the
> FPGA to which I can bind some debug signals).

Frame store address generation? If you fail to reset a frame store pointer, you will walk through memory and probably through hyperspace...

> Any kind or informative suggestions would be very much appreciated.

Open a bottle or two and chill out - let your subconscious work on it!

> Regards & seasonal greetings to all.
Article: 93468
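The frame-store-pointer suggestion above can be made concrete with a sketch: reload the read address at every vertical sync, so the pointer can never walk off the end of the frame buffer (FRAME_BASE and all signal names here are hypothetical):

```verilog
// Hedged sketch: keep the SDRAM read pointer locked to the frame.
always @(posedge clk) begin
    if (vsync)
        rd_addr <= FRAME_BASE;       // restart at the top of the buffer
    else if (pixel_valid)
        rd_addr <= rd_addr + 1;      // otherwise step through the frame
end
```

Without the reload, the address increments forever and wraps through the whole SDRAM; the failure then repeats with a period set by how long the wrap takes, which could produce exactly the kind of long, regular on/off cycle being described.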
I am evaluating Synplicity and am trying to run a comparison on the system that we have developed in the EDK. Basically, I want to do a side-by-side look at how Synplicity synthesizes the design as compared to XST.

There are a couple of papers out there that explain how to do this process -- sort of. I am still not sure how to tie everything together within Synplicity. I have set the EDK to generate the netlist without synthesizing anything, so you end up with a system.v file that has all the global I/O and instantiations of all the IP.

I have pulled all of those files into Synplicity, tagged the system.v as the top level, and tried compiling. Synplicity doesn't know where some of the libraries are, complains about some other things, and quits. OK, fair enough. I could create a top.v file where I instantiate the system and tie all the I/O correctly, but I still don't know if that is going to work. And I could point it to the libraries as well.

I just want to know if there is a better way to go about this. The EDK is really nice in that it takes care of all the behind-the-scenes stuff.

Thanks,

Tom
Article: 93469
"motty" <mottoblatto@yahoo.com> schrieb im Newsbeitrag news:1135274235.908537.217100@g49g2000cwa.googlegroups.com...
>I am evaluating Synplicity and am trying to run a comparison on the
> system that we have developed in the EDK. Basically, I want to do a
> side by side look at how Synplicity synthesizes the design as compared
> to XST.
-snip-

Well, if you go behind the scenes you are pretty much alone. Once upon a time, before EDK, there was V2PDK, and Synplify was the only synthesis tool usable for it; today it's the other way around - XST is pretty much a must for EDK. You can use Synplify for individual IP cores used in the EDK system, but at the end you still use EDK/XST to bind it all together. Some of the EDK IP cores (MicroBlaze, for example) are converted from compressed binary HDL to NGC files, and Synplify can not read or produce those formats.

If you want some Synplify 'comparison', talk to them directly; they do lots of in-house testing themselves. An attempt to use an EDK system for an XST <> Synplify evaluation is way too troublesome.

Antti
Article: 93470
Jim Granville wrote:
>
> Yikes!
> One wonders how _CAN_ SW make a carefully floorplanned design go
> backwards? By how much?
>
> Is that the lazy routing, being so bad, it actually finds a longer
> path than earlier SW?

Enough that a design that passed timing with the earlier tools will not pass timing no matter what you do with the newer tools, short of hand routing it. About 10% loss in performance on average in each major revision. There was a huge hit going to 5.2. 7.1 seems to have a much smaller degradation from 6.3.

Yes, the routing got lazy, so that it actually finds a longer path than it did with earlier software. Quite often it will not find the direct connection to a neighboring cell, and instead routes it all over the place, which adds delay, increases power consumption, and congests the routing resources so that other nets also get a circuitous route and the overall timing is even further degraded.

> I did wonder how Altera suddenly found power savings in SOFTWARE -
> perhaps they now do exactly this, clean up messy, but timing legal,
> routes? Anyone in Altera comment?

From what I understand, Altera is moving toward more delay-based clean-up. Xilinx has moved away from it, and is instead pursuing capacitance-based clean-up to reduce the power... which not only may miss the mark, but also requires toggle rate information for each net.
Article: 93471
Bottle is already open... Living in Greece, can I suggest how nice CAIR sparkling wine is? It is methode champenoise (meaning it is made in the same way as the sparkling wine from the Champagne region of France), but at a fraction of the cost. Actually, it is made in Rhodes (a rather large island here in Greece).

However, all I can think is that somehow the BlackFin is resetting the array. Unfortunately, I didn't write any of the code nor UBOOT, so I guess tomorrow (they're very small legs and too much CAIR) I will look at all of the FPGA control signals...

Please keep the suggestions coming (apart, that is, from YOU, mother-in-law).

Peter
Article: 93472
Austin,

From what I have seen, folks who use hierarchy generally do a decent job of it. You really have to work hard at making a hierarchical design worse than a flat design. Hierarchy puts organization in the design, and because crossing levels of hierarchy is a little bit painful, it forces the designer to think in terms of components and to group related stuff together. Even in a poor example of hierarchy, there is at least a little bit of grouping done, and therefore information the tools can use. I and others have been asking for hierarchical tools from Xilinx for close to 15 years. I honestly don't think Xilinx understands why using hierarchy is a good thing.

Austin Lesea wrote:
> I have to differ here. I understand your issues, but if we deal with
> the ever expanding "standard suite" of test designs with better
> performance, and better run time, I have to assert that it is better. Is
> everything better? Of course not.

Fine, but improvements shouldn't break existing designs. Nearly every single one of my designs over the past 5 years gets better results with the tool that was current at the start of the project than it does with later versions of the tools. I could accept a low rate of recidivism, but close to 100% is criminal. I know I am not the only "power user" running into this; in fact, it regularly comes up as a subject here at each major release of the tools.

> stop and give up if it can't meet the requirements). I think you will
> appreciate that what was done did provide for a much faster time to get
> the design.

Ummm, well, no. The tools give you a faster time to completion for a run through the tools, but that doesn't help if the design does not meet the timing requirements. It actually takes longer to complete a design, because you need to iterate on the place and route much more than when there was a predictable routing solution for a good placement. Faster completion in the tools does not equal faster time to get the design done.

> We do make the parts bigger every generation, and you may have noticed,
> processor power is not keeping up anymore.

Yup, and hierarchy can help you tremendously here. Routing complexity (and therefore the effort needed) goes up with roughly the square of the device size measured in LUTs, primitives, cells, etc. By breaking a size-k problem down into N hierarchical sub-assemblies of size k/N, the total effort is about N*(k/N)^2 = k^2/N, far less than the k^2 of the flat problem.
Article: 93473
In message <1135100454.531.0@echo.uk.clara.net> "John Adair" <removethisthenleavejea@replacewithcompanyname.co.uk> wrote:

> As I think everyone else has said TTL levels between the devices should work
> ok. The 9500XL parts are 5V tolerant so you can drive them with a 5V signal
> source.

That's good to know. I'll have to see if I can get a 95144XL to communicate with a Gameboy CPU at some point. I know those things can handle 5V logic levels, but I'm not sure about 3.3V, and there's basically no documentation on that side of things. Well, not without much NDA signing anyway.

> You can get some of these CPLDs from Farnell in the UK with duty issues and
> they usually take credit cards. Official distributor is Silica in the UK.

What do you mean by "duty issues"? I've tried Silica - they're not willing to supply anyone without company registration documents. Shame Memec aren't distributing Xilinx kit anymore - they were pretty good about supplying small quantities of parts.

Thanks.

--
Phil.                              | Acorn RiscPC600 SA220 64MB+6GB 100baseT
philpem@philpem.me.uk              | Athlon64 3200+ A8VDeluxe R2 512MB+100GB
http://www.philpem.me.uk/          | Panasonic CF-25 Mk.2 Toughbook
No software patents! <http://www.eff.org/> / <http://www.ffii.org/>
A seminar on time travel will be held two weeks ago.
Article: 93474
peter.halford@alarmip.com wrote:
> Dear Group,
>
> I am designing an LCD controller, straight VGA using a 6.5" TFT (60Hz
> and a 25MHz dot clock) with a 32MBit SDRAM frame buffer (using two
> Xilinx block RAMs as alternate line buffers).
-snip-
> Well, everything works just fine for about 200ms (correct syncs etc...)
> and then things go dead for 800ms -- no syncs, nothing. And then things
> spring back to life for another 200ms ad nauseam...
-snip-
> Any kind or informative suggestions would be very much appreciated.
>
> Regards & seasonal greetings to all.

As you seem to have a behaving system some of the time, lock onto that, and expand on what you DO know - viz: try to get better values for the 200ms and 800ms - things like how many frames, exactly, and whether it is a precise 1 second repeat. And how dead is dead: syncs gone, or changed? (etc.)

Are the syncs precisely correct? It's not something like the monitor just not quite being able to hold onto things? That's many, many frames, so it sounds unlike a gate-level issue. Then look around at what it is in the system that changes at precisely those times....

-jg