On 10 Feb 2006 13:40:44 -0800, fpga_toys@yahoo.com wrote: > >Andy Peters wrote: >> fpga_toys@yahoo.com wrote: >> > rickman wrote: >> > > If your VOIP started dropping packets so that your phone calls were >> > > garbled and the provider said, "of course, we had a hot day, do you >> > > expect to see the same speeds all the time?", would you find that >> > > acceptable? >> > >> > IT HAPPENS!!! Reality Check ... IT HAPPENS EVERY DAY. >> >> Packets are dropped, but for congestion reasons, not because the air >> handling in the switch room set the temperature up a few degrees. > >It happens when the router is unable to keep up with the traffic, aka >congestion. >Doesn't really matter why it's slower than the arrival rate. > >If async can deliver a faster packet processing rate when the >environmentals >are better, then the cogestion point is moved higher for those periods, >unlike >being locked to worst case performance. Have you ever designed a router? Most packets are handled by the data plane. Packets will be dropped if there's congestion even if the logic is infinitely fast, so any argument about async or sync is moot. (However these would typically be designed with sync logic for sound engineering reasons.) A (hopefully small) subset of packets are handled by the control plane, and throughput and latency are improved by having a faster processor. It doesn't matter whether its internal processing is async or sync, as long as it is as fast as possible and it can interface to the various sync devices surrounding it (memories, backplane, etc.). Regards, AllanArticle: 96826
Allan Herriman wrote: > Have you ever designed a router? Yes, and no. I've built routers out of commodity parts, but not an ASIC or FPGA one. > Most packets are handled by the data plane. Packets will be dropped > if there's congestion even if the logic is infinitely fast, so any > argument about async or sync is moot. (However these would typically > be designed with sync logic for sound engineering reasons.) Only on current production high end routers. Consumer routers still seem to be processor based, as a good sized FPGA/ASIC for a router core still seems to be out of reach for a $49 retail router or wireless router. > A (hopefully small) subset of packets are handled by the control > plane, and throughput and latency are improved by having a faster > processor. It doesn't matter whether its internal processing is async > or sync, as long as it is as fast as possible and it can interface to > the various sync devices surrounding it (memories, backplane, etc.). Certainly true on larger wire speed routers. But again, only on the high end gear at this point.Article: 96827
panteltje@yahoo.com wrote: > Alex Gibson wrote: > > > > You do not need an fpga for this but > > > > > > With PWM , simple closed loop control and a LC filter can solve your > > > problem. > > > A FPGA adds 7 segment display, a few buttons to adjust voltage manually > > > or > > > even RS-232 control is very feasable. > > > ok you have a fpga then you can make it multiple output power supply. > > > > > > Just for fun add sinusoidal outputs to make it universal.(Again PWM) > > > > > > yusuf > > > > Could do the same with a pic or even a cpld > Well, Sunday, and was just thinking about this a bit, > In ANY case, when using FLASH based ROm or a FLASH based FPGA > (and you will likely want a AD converter and these new Actel FPGAs > have > one build in... then do the PWM..... > * B U T * > We all know FLASH does not hold for ever, I have some PSU that are 20 > years > old and still work fine. > So that begs the question WHAT will happen when a bit goes wrong in the > EEPROM > or FLASH FPGA? > It *could* kick your programable FPGA output to max volts no curent > limit ! > So I think that in case of a FLASH based micro controller, or FPGA (so > not an analog > solution, or ROM based ) one MUST provide a second circuit 'crowbar' > and in this case > 'programmable crowbar'. > > Then from that POV one should actually do the PWM and compare in > analog, and only > use the FPGA output to perhaps set some switches to select voltage > range. > You *can* specify 20 years max usage in your documentation, but if the > thing > blows up 1M$ lab equipment one day later I wonder if they could sue > you. > > As for the OP .. his question implies zero knowledge of FPGA and likely > electronics. > So I referred to monkeys to explain it. > > Sunday .... it is going to be very cold here too this week...... Hi, you mean that flash memory isn't reliable, the same feeling to me. :-) But, when using a flash-based fpga, you only need to read the configuration data from flash memory and load them to configuration sram once at the moment of power up. I think this would not be a problem even for such a critical system. What do you think about it? Best regards, WickyArticle: 96828
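For what it's worth, the PWM piece of such a supply is tiny in an FPGA; the safety/crowbar question above is a separate matter. A minimal VHDL sketch, with an assumed free-running counter and an 8-bit duty-cycle register (names and widths are illustrative only):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity pwm8 is
        port (
            clk     : in  std_logic;
            duty    : in  unsigned(7 downto 0);   -- 0 = always off, 255 = high for 255/256 of the period
            pwm_out : out std_logic
        );
    end entity pwm8;

    architecture rtl of pwm8 is
        signal cnt : unsigned(7 downto 0) := (others => '0');
    begin
        process (clk)
        begin
            if rising_edge(clk) then
                cnt <= cnt + 1;                   -- free-running, wraps every 256 clocks
                if cnt < duty then                -- compare against the duty-cycle register
                    pwm_out <= '1';
                else
                    pwm_out <= '0';
                end if;
            end if;
        end process;
    end architecture rtl;

The duty register would be written by whatever control loop, button interface or RS-232 port sits around it; the LC filter, current limiting and any independent crowbar stay in the analog domain.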
Hello all, I have to create a low speed clock (8 kHz) out of a high speed one (50 MHz) on an Altera FPGA (Cyclone II). It has to be a real clock, not a clock enable. A PLL is of course out of the question because this frequency is out of its range. Trying to generate it with a simple counter generates many warnings, and in some cases Quartus reports that the design does not meet timing constraints. I tried to designate the generated signal as a clock, but it doesn't improve the situation. Is there another, "correct" way of doing this? Maybe I should insert some kind of buffer after the counter? I would appreciate your advice. Thanks, Avishay OrpazArticle: 96829
I am routing a PCB with an FPGA and an ADC which has LVDS outputs. I am trying to match the length of the signals. Will I be OK if I match them to within 1 mm? The max length of any signal will be 33 mm and the ADC is clocked at 250 MHz. Thanks JonArticle: 96830
Hi, I am new to this field and studying my undergraduate course on FPGAs. To perform experiments practically I am planning to buy one chip, so please suggest which one among the available FPGAs suits me so that I can get in touch with it, and also suggest any software available to interface the same with my PC. Thanks in advance, waiting for a response. Bye, chaituArticle: 96831
Chaitu The two biggest suppliers of FPGAs, Xilinx and Altera, both have free software for their lower end device families, Spartan and Cyclone respectively. Spartan-3 and Cyclone2 are the latest. Between them they have something like 80-90% of the FPGA market, so they are good to have experience of when looking for jobs after graduation. If you are looking to add a chip to one of your own circuits then you will want to look for a package that is easy to mount on a board. None of the modern packages is easy to solder onto a circuit unless you have professional kit, especially the BGA packages. If you just want to buy a board to use, a number of vendors, including ourselves, have low cost products for students. We even have student based pricing under our UAP scheme. John Adair Enterpoint Ltd. - Home of Raggedstone1. The Low Cost Spartan-3 Development Board. http://www.enterpoint.co.uk <chaitu11311@gmail.com> wrote in message news:1139664844.963424.156820@g47g2000cwa.googlegroups.com... > hi, i am new to this field and studying my under graduate course on > FPGAs. to perform experiments practically i am planning to buy one > chip. so please suggest me which one among the available FPGAs suit me > so that i get in touch with it and also suggest me any softwares > available to interface the same with my pc. > thanks in advance > waiting for responce, > bye > chaitu >Article: 96832
Ray If it is of assistance, our Broaddown2 product will allow playing at 1.5V on most banks. Needs some solder bridges made but it is built in. We can also do it in a crude fashion on Raggedstone1 on 4 banks. Broaddown4 and Hollybush1 (both showing shortly) will also have some capability. John Adair Enterpoint Ltd. - Home of Broaddown2. The Ultimate Spartan-3 Development Board. http://www.enterpoint.co.uk "Ray Andraka" <ray@andraka.com> wrote in message news:kpaHf.45966$bF.17191@dukeread07... > We are considering a change to the IO standard used for the QDR-II > interface (1.5V HSTL Class 1 instead of 1.8V HSTL Class 1 (1.8V)). Xilinx > has not created any demo boards that use the 1.5V interfaces, but they > claim that it should work fine. > > Have any of you completed a Xilinx design that uses the 1.5V interfaces > (for QDR-II) or know of a successful development?Article: 96833
Hi all, I want to use FPGAs in the control field. I know that a uC or DSP processor is the preferable choice for a control engineer in general. But in my opinion, HDL-based logic inside an FPGA is more appropriate for tasks that need high performance but only simple functionality, such as the role of the controller in a control system, while the C language can realize complicated applications. I.e., in my control system, the controller will be based on HDL inside the FPGA to get extremely high performance and high reliability, while the rest of the work, such as communication, data processing and the human-machine interface, will be based on the C language to get a more flexible system. Certainly, the C-based embedded system may be implemented inside the FPGA to form a SoC system, and may be based on an OS for more convenience. Can anyone give me some advice and papers about this idea? Thanks a lot. Best Regards, WickyArticle: 96834
Ray, Yes, one of my designs has a QDR with 1.5V HSTL-1 interfacing - no problems. Rog. "Ray Andraka" <ray@andraka.com> wrote in message news:kpaHf.45966$bF.17191@dukeread07... > We are considering a change to the IO standard used for the QDR-II > interface (1.5V HSTL Class 1 instead of 1.8V HSTL Class 1 (1.8V)). Xilinx > has not created any demo boards that use the 1.5V interfaces, but they > claim that it should work fine. > > Have any of you completed a Xilinx design that uses the 1.5V interfaces > (for QDR-II) or know of a successful development?Article: 96835
You can build a 13-bit synchronous and programmable counter, at a 50 MHz clock, in any modern FPGA. No problem. Peter Alfke, Xilinx. avishay wrote: > Hello all, > I have to create a low speed clock (8KHz) out of a high speed one > (50MHz) on an Altera FPGA (Cyclone II). It has to be a real clock, not > a clock enable. PLL is of course out of question because this frequency > is out of its range. Trying to generate it with a simple couter > generates many warning, and in some cases Quartus reports that the > design does not meet time constraints. I tried to designate the > generated signal as a clock, but it doesn't improve the situation. > Is there another, "correct" way for doing this? Maybe I should insert > some kind of buffer after the counter? I would appreciate your advice. > > Thanks, > Avishay OrpazArticle: 96836
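For reference, a minimal VHDL sketch of such a divider (names are illustrative only; 50 MHz / 8 kHz = 6250, so a 12-bit counter that toggles the output every 3125 cycles gives a 50% duty 8 kHz square wave):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity clkdiv_8khz is
        port (
            clk_50m : in  std_logic;
            clk_8k  : out std_logic
        );
    end entity clkdiv_8khz;

    architecture rtl of clkdiv_8khz is
        constant HALF_PERIOD : natural := 3125;             -- 50e6 / 8e3 / 2
        signal count   : unsigned(11 downto 0) := (others => '0');
        signal clk_reg : std_logic := '0';
    begin
        process (clk_50m)
        begin
            if rising_edge(clk_50m) then
                if count = HALF_PERIOD - 1 then
                    count   <= (others => '0');
                    clk_reg <= not clk_reg;                  -- toggles at 16 kHz -> 8 kHz square wave
                else
                    count <= count + 1;
                end if;
            end if;
        end process;
        clk_8k <= clk_reg;  -- if this must clock other logic, place it on a global clock resource
    end architecture rtl;

The warnings usually come from the divided signal being treated as ordinary routing; assigning it to a global clock line (or, where the application allows, keeping everything at 50 MHz with an 8 kHz clock enable) is what makes the timing analysis clean.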
wicky wrote: > Hi all, > > I want to use FPGA in control field. I know that uC or DSP processor is > a preferable choice to control engineer in general. But in my opinion, > the HDL-based logic inside FPGA is more appropriate for those high > performance needed but simple function tasks just as the role of a > controller in control system, while the C language can realize > complicated application. i.e. In my control system, the controller > will be based on HDL inside FPGA to get a extremely high performance > and high reliability, while the rest of the work, such as > communication, data processing and human-machine interface will be > based on C language to get a more flexible system. > > Certainly, the C-based embedded system may be implemented inside FPGA > to form a SOC system and may be based on an OS for more convinent. > > Can anyone give me some advice and papers about this idea? thans a > lot. > > Best Regards, > > Wicky > Look at the highest sampling rate that will make a difference given the limitations of your plant. If it's 10kHz or lower then you can just code your loop in C, run it on one of the newer DSP chips, and not have to mess around with FPGAs at all. If it's 50kHz or lower then with a slight bit of extra work one of the newer DSP chips can be used with a moderate amount of assembly code and a careful trimming of other functionality. If it's 100-200kHz then one of the newer DSP chips can be used, but the level of work necessary may be greater than the level of work necessary for your FPGA. It's the rare plant that needs a loop sampling rate greater than 5-10kHz. Don't expect the control rule to be all that simple -- one of the advantages of implementing a controller on a processor is that you can wrap it with all sorts of nonlinear frills to take care of corner cases that in an analog system you'd just have to live with, or do everything in your power to avoid. Think hard about the reliability of implementing your controller in an FPGA. Why should 1000 lines of Verilog be more reliable than 200 lines of C? Ask yourself why you are stuck on one particular technology for implementing your control rule. Control rules are just math; you should pick the implementation technology that makes the most sense for the problem at hand, be it a processor, FPGA, vacuum tube or hemp ropes and spinning mahogany drums. -- Tim Wescott Wescott Design Services http://www.wescottdesign.com Posting from Google? See http://cfaj.freeshell.org/google/Article: 96837
On Sat, 11 Feb 2006 07:10:21 -0600, "maxascent" <maxascent@yahoo.co.uk> wrote: >I am routing a pcb with a fpga and adc which has LVDS outputs. I am trying >to match the length of the signals. Will I be ok to match them to within >1mm. The max length of any signal will be 33mm and the adc is clocked at >250MHz. Short answer: yes. Long answer: I'm not quite clear about whether you mean length matching between the various bits coming out of the ADC, or length matching between the P and N signals in an LVDS pair of a single bit. 1. Assume that you are referring to the length match between the bits within the bus. Divide 1mm by the speed of light in epoxy to get the skew between the signals. My mental ALU says this works out to be about 5ps. Note that this is *skew*. You still have to ensure adequate timing margin for each signal in the bus. In this sense, the skew reduces your timing margin, however, if you have to worry about 5ps when your clock has a 4ns period, you have much bigger problems on your hands. 2. Assume you are referring to the length match between the P and N signals in an LVDS pair of an individual bit. Normally for data signals this would be treated as in case 1, however an ADC is sensitive to noise, and a length mismatch may result in a small amount of differential mode to common mode conversion. You might want to do some spice modelling here, however I don't expect you'll have a problem. Also bear in mind that 1mm is probably better than the length match inside the BGA package. (I'm referring to the tracks between the solder balls and the die pads.) If you get really keen, you can determine these lengths, and adjust the lengths of the traces on your pcb to compensate. I did this once for a couple of 16 bit 622MHz buses. It was a PITA to do, and probably didn't make much real difference (it might have changed the eye width from 73 to 74% or something like that). The PCB guy decided to change to software programming after that project. Since then, we've changed to Expedition PCB from Mentor, and the (new) PCB guy here says length matching including offsets is really easy to do. Another caveat: the velocity factor of a trace differs depending on the layer inside the PCB. Outer layers propagate signals faster due to the lower effective permittivity! This can cause problems with length matching if you are not careful. A good PCB tool will allow you to specify the velocity factor independently for each layer. Regards, AllanArticle: 96838
maxascent wrote: > I am routing a pcb with a fpga and adc which has LVDS outputs. I am trying > to match the length of the signals. Will I be ok to match them to within > 1mm. The max length of any signal will be 33mm and the adc is clocked at > 250MHz. Yes, the approx. 6 ps skew because of the 1 mm should be negligible. In fact, I would say you'd still be safe with 10 times as much skew (10 mm length matching). Dimiter ------------------------------------------------------ Dimiter Popoff Transgalactic Instruments http://www.tgi-sci.com ------------------------------------------------------Article: 96839
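To put rough numbers on that: propagation delay on an inner FR-4 layer is about sqrt(Er_eff)/c, i.e. roughly 6-7 ps per mm for Er_eff around 4.2-4.5, and somewhat less on outer layers. So a 1 mm mismatch costs on the order of 6 ps against the 4 ns clock period at 250 MHz, well under 1% of the interval. (Back-of-envelope figures only; the exact value depends on the stackup.)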
Thanks for your rapid reply. In fact, just as you said, it's the rare plant that needs an HDL-based controller now; most of them will be implemented with a DSP controller and work well. For the reliability, I know that HDL will be more complicated than the C language; what I mean is that something in hardware must theoretically be stronger than something in software. As for the problem of writing HDL code, I think that G-language (graphic language tools such as DSP Builder or System Generator) will decrease the difficulty noticeably. Of course, they may lose some efficiency, but given the extremely high performance you get in an HDL controller, this problem will be less sensitive than with the C language. Now, G-language tools aren't mature enough. They can only generate code from Simulink for some simple algorithms such as PID; if you want to try some advanced algorithm toolbox in Simulink, you must write HDL code by hand. For the consideration of cost, I think an HDL-based PID controller in a low-cost FPGA such as the EP1C6 will get at least 10 times the performance of a TI C2000 series DSP controller. Best regards, WickyArticle: 96840
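To make that concrete, here is a minimal VHDL sketch of the proportional-plus-integral core such a controller boils down to. Everything here is illustrative rather than taken from any of the tools mentioned: signed 16-bit setpoint/feedback, power-of-two gains (Kp = 1/4, Ki = 1/256) so the multiplies reduce to shifts, and no saturation or anti-windup.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity pi_core is
        port (
            clk      : in  std_logic;
            sample   : in  std_logic;                  -- one-cycle strobe at the loop rate
            setpoint : in  signed(15 downto 0);
            feedback : in  signed(15 downto 0);
            drive    : out signed(15 downto 0)
        );
    end entity pi_core;

    architecture rtl of pi_core is
        signal acc : signed(31 downto 0) := (others => '0');   -- integrator state
    begin
        process (clk)
            variable err : signed(16 downto 0);
            variable u   : signed(31 downto 0);
        begin
            if rising_edge(clk) then
                if sample = '1' then
                    err := resize(setpoint, 17) - resize(feedback, 17);
                    -- output uses the previous integrator value plus the new proportional term
                    u := shift_right(resize(err, 32), 2)        -- Kp * err, Kp = 1/4
                       + shift_right(acc, 8);                   -- Ki * sum(err), Ki = 1/256
                    drive <= resize(u, 16);                     -- no output saturation shown
                    acc   <= acc + err;                         -- integrate (no anti-windup shown)
                end if;
            end if;
        end process;
    end architecture rtl;

The arithmetic itself is a few adders and barely registers on an EP1C6-class device; as the earlier reply points out, the real work is everything around it - ADC interfacing, scaling, limits, mode logic - which is exactly the part the graphical tools do or don't generate for you.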
I checked with a scope after having inserted "spy pins" in the middle of an AF/AM cable --> swing is indeed 0 V - 3.6 V. The "spied" device was a webcam, the SW used was ampcap. My scope is old and inaccurate, I'm not sure of the exact acceptable level for NON 5V tolerant devices (3.3V? 3.6V?...) When using live video, there are data frames of length ~40 ms & period ~50 ms. When "zooming", I can see the USB full speed frames every 1 ms. These frames have a duration of ~200 - 250 us when live video is enabled. When no video is enabled, the 1 ms frames are still there but with no data. "mk" <kal*@dspia.*comdelete> wrote in message news:dgbju1hnj345uddqmf0g7ed58ss12gnaga@4ax.com... > On Wed, 8 Feb 2006 07:18:46 +0100, "Jerome" <nospam@nospam.com> wrote: > >>Thanks Eric, >>However concerning the 5V tolerance, i'm not sure it is necessary since >>the >>D+ & D- level swing >>stays in the 0V - 3.6 V range >> >>My first step will consist in measuring these levels with a scope > > Those levels are OK but in USB spec there is a tolerance to short to > 5V requirement which is why it's mentioned. In a controlled > environment you don't need 5V tolerance on the data inputs. >Article: 96841
On 10 Feb 2006 20:00:36 -0800, fpga_toys@yahoo.com wrote: > >Allan Herriman wrote: >> Have you ever designed a router? > >yes, and no. I've built routers out of comodity parts, but not an ASIC >or FPGA one. > >> Most packets are handled by the data plane. Packets will be dropped >> if there's congestion even if the logic is infinitely fast, so any >> argument about async or sync is moot. (However these would typically >> be designed with sync logic for sound engineering reasons.) > >Only on current production high end routers. Consumer routers still >seem to be processor based, as a good sided FPGA/ASIC for a router core >still seems to be out of reach for a $49 wireless retail router or >wireless router. > >> A (hopefully small) subset of packets are handled by the control >> plane, and throughput and latency are improved by having a faster >> processor. It doesn't matter whether its internal processing is async >> or sync, as long as it is as fast as possible and it can interface to >> the various sync devices surrounding it (memories, backplane, etc.). > > >Certainly true on larger wire speed routers. But again, only on the >high end gear at this point. The end customers would describe your hypothetical low end box as a router, but in the context of this discussion (and newsgroup), it's just a general purpose processor with some comms interfaces with some software that gives it some meaning. In that context, I agree that a faster processor is a good thing, and IF an async processor will give better performance (either in terms of throughput, latency or power) on average, then it is also good. Please bear in mind that most comms interfaces (e.g. Ethernet) are synchronous in nature (at least at the physical layer). One has real time requirements to meet. Synchronous design makes a lot of sense for a hardware router or switch, etc. (As it does for the products we design here, but I won't bore you with the details.) Regards, AllanArticle: 96842
How about using some resistors to reduce the swing, then use a differential input of the FPGA ? Jerome wrote: > I checked with a scope after having insterted "spy pins" in the middle of a > AF/AM cable > --> swing is indeed 0v - 3.6V ."spyied" device was a Webcam, used SW was > ampcap. > > My scope is old and unaccurate , i'm not sure of the exact acceptable level > for NON 5V tolerant devices > (3.3V? 3.6V?...) > > When using live video, there are data frame of length ~40 ms & period ~50 ms > When "zooming", i can see the USB full speed frames every 1 ms. > Theses frames have a duration of ~200 - 250 us when live video is enabled > > When no video is enabled, the 1ms frame are still threre but no data > > > "mk" <kal*@dspia.*comdelete> wrote in message > news:dgbju1hnj345uddqmf0g7ed58ss12gnaga@4ax.com... > >>On Wed, 8 Feb 2006 07:18:46 +0100, "Jerome" <nospam@nospam.com> wrote: >> >> >>>Thanks Eric, >>>However concerning the 5V tolerance, i'm not sure it is necessary since >>>the >>>D+ & D- level swing >>>stays in the 0V - 3.6 V range >>> >>>My first step will consist in measuring these levels with a scope >> >>Those levels are OK but in USB spec there is a tolerance to short to >>5V requirement which is why it's mentioned. In a controlled >>environment you don't need 5V tolerance on the data inputs.Article: 96843
Roger wrote: > Ray, > > Yes, one of my designs has a QDR with 1.5V HSTL-1 interfacing - no problems. > > Rog. > > "Ray Andraka" <ray@andraka.com> wrote in message > news:kpaHf.45966$bF.17191@dukeread07... > >>We are considering a change to the IO standard used for the QDR-II >>interface (1.5V HSTL Class 1 instead of 1.8V HSTL Class 1 (1.8V)). Xilinx >>has not created any demo boards that use the 1.5V interfaces, but they >>claim that it should work fine. >> >>Have any of you completed a Xilinx design that uses the 1.5V interfaces >>(for QDR-II) or know of a successful development? > > > Thanks, that's what I needed to know. A customer is making a custom board and wanted to know that the 1.5v HSTL worked before committing to it.Article: 96844
Thanks for everyone's help, very useful info. JonArticle: 96845
rickman wrote: > So the async design likely must have larger margins added to the design > of the handshake path and the result is it will have a slower maximum > speed compared to a sync design. Simply not true for all async designs, especially those that generate the ack function from the outputs of the local gates -- in those cases there are no "margins" added at all, as it is implicitly not necessary by the design of the logic. This works well for fast routing and low fanout, where gate delays are small and routing delays are small. For larger designs, where the global clock skew rapidly exceeds these timings, there is significant gain. Ten-year-old Phased Logic (ack encoded as phase), and other async designs with ack based on logic outputs (rather than a separate timing path), are a good example: http://www.erc.msstate.edu/mpl/projects/phased_logic These designs compete well where global clock skew and environmental margins greatly exceed the typical timing of a logic element and a short route delay.Article: 96846
rickman wrote: > Hey, if there is real data out there showing me how this works and that > it is clearly better, fine. I'm just saying this is not that sort of > data. > > <snip> > > I'm not doubting the data, I'm doubting the comparison. Do you see the > difference? ( but above ) : "I'm just saying this is not that sort of data." I'll leave others to decode that, I'm lost.. >> Who claimed this was easy ? It's what they must have done >>in the tools area, that impresses me as much as the >>(claimed) silicon results. > > > Yes, I am sure it was a lot of work and that is part of my concern with > it. But as long as it is *their* work, if they start making chips that > solve my system problems better than other chips, then I'll use them. > But this chip actually runs slower max speed in the same process. Did > you notice that? The clocked processor runs up to 100 MHz, IIRC while > the async processor was only 77 MHz room temp! There is another, perhaps clearer press release here : http://www.eet.com/news/design/showArticle.jhtml;jsessionid=RZAVAFCAIVCMGQSNDBECKHSCJUMEKJVN?articleID=179103395 Here, the key comparison is "at equivalent performance the ARM996HS consumed a factor of 2.8 less power than the ARM968E-S, or 36 percent, according to simulation benchmark data from Handshake Solutions". As I have said before, many designers will grab that with both hands. If this is offered as FAB ready, those designers will not actually care about the details. And they also say "the ARM996HS is being promoted for its low electromagnetic footprint, another benefit of clockless performance which would make the processor core suitable for automotive and mixed-signal applications." Better EMC is also not to be sneezed at... but I liked this comment too - seemed very relevant... " Richard York, ARM’s ARM996HS product manager, said ARM would not rush to introduce clockless versions of other cores. “It [asynchronous logic] will take some time to become widely accepted because it is very different,” he told EE Times. " and " “A self-timed Cortex M3 would be a fascinating product but we want to see how this product goes in the market first.” " I look forward to the silicon, both ARM and 80C51 versions. -jgArticle: 96847
John Adair wrote: > If you are looking to add a chip to one of your own circuits then you will > want to look for a package that is easy to mount on a board. None of the > modern packages are an easy solder onto a circuit unless you have profession > kit especially BGA packages. If you just want to buy a board to use a number > of vendors including ourselves have low cost products for students. We even > have student based pricing under our UAP scheme. Actually, if looking to add an FPGA chip to one of your own designs, BGA is by far the easiest for a hobbyist. Hand soldering a 100-200 pin QFP variant package is a nightmare for a hobbyist without a stereo microscope and a very fine low temp iron, and lots of acquired skill. Even stenciling it is a nightmare, as it takes accurate placement to avoid smearing the paste. There are several tricks for prototype BGA assembly which work well for a hobbyist, and a low volume commercial proto lab as well. Solder paste for attachment is a super pain, as it requires accurate placement to avoid smearing the paste - nearly impossible by hand without a placement fixture. What works much better is to manually wet all the BGA pads on the PCB by coating the PCB with ample water soluble flux and dragging a large solder ball across all the pads with a fat "screwdriver" tip at soldering temp on a temp controlled iron -- do not linger, to avoid heat damage to the pcb or device. Do this until all the pads have a uniform solder bump and are shiny. This works equally well for cleaning up a BGA device prior to reballing using the SolderQuik preforms. Then wash off the expended flux, dry and bake to remove any moisture trapped by the cleaning process to avoid popcorning pads off. Next, apply a liberal amount of water soluble flux to the pcb and BGA balls, and carefully mate the balls to the pads by eye. If you have a device outline on the pcb that is accurately registered to the pads, this helps (either copper, soldermask or silkscreen). It sometimes helps to bring two ball pads on the outside edge of the board to a land large enough to touch with the soldering iron, which allows fusing a ball on opposite sides/corners of the package to hold the device in place during handling for assembly. Ditto for QFP packages -- prewetting the pads and leads with an iron, and then just fluxing for assembly is sometimes easier than accurate placement with paste - as you can always drag the iron with a little dangling solder across the leads that didn't quite wet/attach with a fillet well enough. If you have a toaster oven, take some time to calibrate it, and get a temp stick. http://www.pcbexpress.com/stencils/stencil_article_page1.htm Stencils for all the small parts which can be easily hand placed are a great idea too. While I use a couple of Chipmaster SMD-1000's in my lab, I've also used my wife's table top convection oven and a toaster oven for demonstrations for local hobbyists. I suggest preheating the oven/toaster to about 325F and setting the project board into the preheated oven, then cycling the oven/toaster up above soldering temp by 15-25 degrees and leaving it there about 2-3 times longer than the traditional temp profile of a commercial reflow oven, as there isn't quite as much energy available to heat the board and parts to temp as quickly. This is where a temp stick is useful, but not required. It will generally take a few boards before you find exactly the perfect process that leaves all the parts properly wetted and attached with minimum heat and process time.
For BGA's this is easy to confirm, by making sure the balls all have a uniform "squat", which confirms the inside balls came to temp and wetted, allowing the package to fall slightly and leaving the balls slightly compressed in shape. For all other parts, take a good magnifying glass or lab scope and verify that the wetting left strong fillets on all the connections. Where the fillet is obvious, but lacking in volume, take note, as that pad needs a larger paste mask opening next time. Reballing BGA packages can be done the same way. But another process is sometimes easier. That is to use a digital hot plate, or heavy electric skillet, and an inexpensive non-contact IR thermometer. Set the BGA in the reballing frame on the preheated surface, which is just over soldering temp, cover with a shiny preheated lid, and turn the heat source off. The mass in the hot plate or skillet will bring the BGA balls to temp, then cool back below plastic state. After a few minutes remove the lid partially (lift one edge) to allow the device to cool, but not hard shock it with cold air. Then remove and place on a warm surface to further cool.Article: 96848
I have the following code: signal busy_condition : std_logic; signal high_registered : std_logic_vector(1 downto 0); signal high_current : std_logic_vector(1 downto 0); busy_condition <= '1' WHEN (CONV_INTEGER(high_current) > CONV_INTEGER(high_registered)) ELSE '0'; During simulation high_registered becomes "10" and high_current = "00". However busy_condition is high impedance. Can someone explain to me what my error is? Thanks a lot GioArticle: 96849
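Not an answer to the 'Z' itself, but for reference, a self-contained version of that comparison - assuming the Synopsys ieee.std_logic_unsigned package, which is where CONV_INTEGER on a std_logic_vector normally comes from - would be:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.std_logic_unsigned.all;   -- provides CONV_INTEGER(std_logic_vector)

    entity busy_cmp is
        port (
            high_current    : in  std_logic_vector(1 downto 0);
            high_registered : in  std_logic_vector(1 downto 0);
            busy_condition  : out std_logic
        );
    end entity busy_cmp;

    architecture rtl of busy_cmp is
    begin
        busy_condition <= '1' when (CONV_INTEGER(high_current) > CONV_INTEGER(high_registered))
                          else '0';
    end architecture rtl;

As written, the concurrent assignment can only ever drive '0' or '1', so the 'Z' is worth tracing to whatever else touches busy_condition (or to which signal is actually being watched) rather than to the comparison itself.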
wicky wrote: > For the reliability, I know that HDL will be more complicated than C > language, what i mean is that something in hardware must be stronger > than that in software theoretically. As for the problem of code writing > of HDL, I think that G-language (graphic language tools such as DSP > Builder or System Generator) will decrease the difficulty observably. > Of course, they may loss some efficiency, but for the extremely high > performance you get in HDL controller, this problem will be less > sensitive than that of C language. With good sized FPGA's falling in price, especially those with processor cores or big enough for a soft processor core, it's actually not that bad a design decision, and should just get better with time. Consider Celoxica (Handel-C), Impulse (Streams-C) or even FpgaC as both an HDL and an HLL for a lot of projects which might migrate from microprocessors/DSP to FPGA.