    March 2015 · Techtrend

    Renewable Transition 2: EROEI Uncertainty

    In the first part of this series, I discussed the practicality of a future transition from fossil fuels to renewable energy sources—specifically renewable sources of electricity such as solar and wind power. One little-discussed hurdle is the fact that, because we must invest energy in renewables up front, a rapid transition threatens to greatly impact near-term demand for energy resulting in unwanted economic and political effects. Another is that, because we will initially use fossil fuels to build our renewable infrastructure, the transition to renewables will result in a short-term increase in carbon emissions. The extent to which both of these impacts will be significant, even their potential to foreclose the possibility of such a transition, will turn on the net energy, or Energy Return on Energy Invested (EROEI), of available renewable energy technology.

    As I alluded to last time, while there are many EROEI numbers floating about for solar, wind, etc., these numbers are far less accurate or verifiable than is, I believe, commonly assumed. I’ll argue that our measurements of EROEI are fundamentally flawed, at least for some purposes. Most EROEI studies serve as a tool to compare different technologies or to gauge advances in technology–a role for which they are generally well suited. However, when viewed from a complete systems perspective, current EROEI figures fail to provide an inclusive measurement. I’ll argue that, for purposes of planning a civilizational transition, a meaningful measure must be inclusive of all energy inputs. Finally, I’ll propose a possible proxy-measurement to address the methodological issues surrounding EROEI.

    “Conventional” EROEI vs. “systemic” EROEI measurements: I’ve been fairly candid with my critique of conventional EROEI measurements, even suggesting that many such measurements are more accurately characterized as marketing copy than empirical, verifiable measurements. This is perhaps a bit unfair–the core of my critique is that these conventional EROEI measurements, while valuable and perhaps even accurate for some purposes, are wholly inadequate to measure the systemic implications of a transition to these alternatives. Here, to assist this critique, I’ve divided EROEI measurements into two broad categories.

    What I’m calling “conventional” EROEI measures use an artificial boundary to simplify their accounting by excluding energy that, while certainly an input, is several steps removed from the direct manufacture of the renewable. This includes standard input-output analysis, process analysis, and hybrids of these two. This type of EROEI estimate (as it must fairly be called) seems to have utility in two areas: 1) comparing the relative EROEI of similar renewables (for example, two different turbine designs), and 2) measuring the progress of design advances (for example, the effect of improving the design of one given type of turbine).

    The second type of EROEI is what I’m calling “systemic” EROEI. While I think this terminology is self-explanatory, here I mean a complete system-wide measure of all outputs compared to all inputs. The value of such a measure is in determining the viability of such technologies to support human civilization as a whole, to sustain certain levels of growth (or contraction), etc.

    The problem with calculating EROEI: Why the need for two sets of EROEI calculations? Why not just use one fundamentally “true” measurement methodology and call it a day? The answer is that measuring EROEI is far more challenging than is commonly presumed because of (among several reasons) the following question: how attenuated must an energy input be before we exclude it from our calculations? Certainly the electricity and natural gas used in a turbine manufacturing plant must be included. What about the energy used to build that plant? What about the energy used to build the machinery used to build that plant? What about the energy used to build the plant to build that machinery, ad infinitum? This is just the tip of the iceberg, but already you can see where this is going: we must draw an artificial boundary if we hope to actually count these energy inputs, but by so doing we necessarily exclude a portion of the actual energy inputs—inputs whose significance is unknown and unknowable (because we can only know their significance by actually counting them—which brings us back to our initial problem). The outcome of these methodologies, while admittedly the result of actual counting of measurable inputs and outputs, remains but an estimate.

    Are these excluded inputs inconsequential? Do we really need to count the energy used to harvest the grains used to feed the longshoreman that loaded the component ores on a dock in Asia as an energy input to the turbine parts that were produced from that ore? And what is the aggregate impact of these attenuated inputs? First, I suggest that we do not and cannot know, as argued above. See the figure below:

    EROEI_order.jpg

    Figure 1: The different theories about the importance of “long-tail” inputs are compared above. In both cases (red and green lines), the total energy input is represented by the area under the curve. In the green line model (the view reflected by most current EROEI figures), the vast majority of energy input is accounted for in very proximate use (e.g., the energy consumed in the turbine assembly line, the energy used to transport the finished turbines, etc.). In the red line model (an inclusive view that I advance here), attenuated energy inputs are much more significant (e.g., the proportion of energy used to build the mining machinery to mine the ore for the metals used in the machine tools in the turbine assembly line). If, as I argue, the red line model is at all accurate, then the inability of current EROEI calculation methodologies to capture these attenuated inputs is a fatal flaw, at least to the extent that we are looking for an inclusive number to answer systemic questions.
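    The contrast between the two curves can be made concrete with a toy geometric-decay model. The decay rates below are purely illustrative assumptions, not measured values; the point is only how sharply the totals diverge:

```python
# Toy model of Figure 1: total energy input summed over "tiers" of remove,
# where tier 0 is direct factory energy and each further tier contributes a
# fixed (hypothetical) fraction of the previous one.

def total_input(direct_energy, decay, tiers=50):
    """Sum energy inputs across tiers of attenuation (geometric model)."""
    return sum(direct_energy * decay**n for n in range(tiers))

direct = 100.0  # arbitrary units of directly measured input

# "Green line" view: attenuated inputs fall off quickly, so the direct
# measurement captures ~90% of the true total.
green_total = total_input(direct, decay=0.1)   # ~111

# "Red line" view: attenuated inputs decay slowly and dominate in aggregate;
# the direct measurement captures only ~20% of the true total.
red_total = total_input(direct, decay=0.8)     # ~500
```

    Under the red-line assumption, the same directly measured input implies a total several times larger, which is exactly the gap that conventional boundaries cannot see.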

    Only one empirical study, to my knowledge, has attempted to calculate an inclusive EROEI, and even that study assumes that, where the 10% of our economy spent on energy fuels the other 90%, this other 90% is in no way a prerequisite to the energy production. See What is the Minimum EROI that a Sustainable Society Must Have? by C. Hall, et al. This study suggests that, where the initial “well-head” EROEI is 20:1, after the necessary supporting infrastructure of society is accounted for the EROEI drops to 3:1. This is a drop from “conventional” to “systemic” EROEI of nearly an order of magnitude.
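    The gap implied by the Hall et al. figures is easy to reproduce with the study's own ratios:

```python
# Reproducing the implied multiplier in the Hall et al. figures: at a 20:1
# "well-head" EROEI, 1 unit of counted input yields 20 units of output; at
# the 3:1 systemic EROEI, that same output requires ~6.7 units of input.

output = 20.0
direct_input = output / 20.0        # conventional accounting: 1.0 unit
systemic_input = output / 3.0       # systemic accounting: ~6.67 units

# Roughly 6.7x more input is counted once society's supporting
# infrastructure is included.
overhead_multiplier = systemic_input / direct_input
```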

    This calculation of a truly inclusive, systemic EROEI for renewable energy sources stands at the very core of our society’s ability to transition to renewable energy. Compare, for example, results of input-output vs. process-analysis EROEI figures from existing studies of wind EROEI at Meta-analysis of net energy return for wind power systems by Kubiszewski, et al. Two brief quotations are in order, first at pp. 2-3:

    The choice about system boundaries is perhaps the most important decision made in net energy analysis, and, for that matter, in other analytical approaches as well. One of the most critical differences among the diverse studies is the number of stages in the life cycle of an energy system that are assessed and compared against the cumulative lifetime energy output of the system.

    and at p.7:

    Studies using the input-output analysis have an average EROI of 12 while those using process analysis an average EROI of 24. Process analysis . . . may be prone to the exclusion of certain indirect costs compared to the input-output analysis.

    Input-output studies (which tend to be “more comprehensive,” including more attenuated inputs, but certainly still only the “front” of the long tail) averaged only 50% of the EROEI figure of the process-analysis studies for similar technologies under roughly similar conditions. If inclusion of a small portion of this long tail can reduce EROEI by as much as 50%, it is at least possible–I would argue likely–that inclusion of the full “long tail” will make “systemic” EROEI as much as an order of magnitude lower than “conventional” EROEI measures.
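    As a purely illustrative extrapolation (the halving factor is an assumption carried forward, not a measurement), repeating the boundary-widening observed between process analysis and input-output analysis reaches the order-of-magnitude gap within a few steps:

```python
# Illustrative extrapolation only: the observed halving (24 -> 12) when the
# boundary widens from process analysis to input-output analysis, repeated
# for further (hypothetical) widenings toward a fully systemic boundary.

def widen_boundary(eroei, factor=0.5, steps=1):
    """Apply 'steps' successive boundary-widenings, each scaling EROEI."""
    for _ in range(steps):
        eroei *= factor
    return eroei

process_analysis = 24.0                                     # Kubiszewski et al. average
input_output = widen_boundary(process_analysis)             # 12.0, matching the study
systemic_guess = widen_boundary(process_analysis, steps=3)  # 3.0, ~an order lower
```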

    Ultimately, energy can neither be created nor destroyed. As a result, in any closed system, the energy flows within that system must balance. The more pertinent question here may be “what artificial boundary do we draw when considering questions that affect human society as a whole?” I argue that, ultimately, we must draw the boundary at our planet, a system that, at least on human-relevant time scales, tends to operate in relative stasis given the continuous input of solar energy. As a result, the EROEI of civilization must balance out to roughly 1:1 plus the rate of growth of human society. While that may seem like a tautology at first, and is used by some to argue that “systemic” or “inclusive” EROEI measures such as those I suggest here are pointless, I think the reverse is true–while 0.8 or 1.2 may seem like minor differences, they fundamentally represent the difference between a shrinking global civilization (and quite possibly a declining foundation of ecological support and resiliency) and one that is growing.

    We have been able to expand and grow our global civilization based, recently, on savings of “ancient sunlight” accumulated over geological time. We have empirical proof that the EROEI of these sources was significantly greater than 1, demonstrated by the sustained growth of human civilization. Now, any attempt to replace that vast inheritance with renewable technologies must address that same systemic question: when ALL the energy inputs are considered, will civilization have the energy to expand, maintain, or be forced to reduce the energy consumed per capita? The answer to that question will largely guide the future of humanity–as such it is critical for us to understand whether the “systemic” EROEI of modern renewable energy technologies is actually, as I suggest, an order of magnitude lower than advertised.

    As a quick thought exercise–and even if you only consider it a slight possibility that systemic EROEI is actually an order of magnitude lower than the numbers floating around–consider the impact on the transition from issues raised in the first post in this series (boot-strapping burden and carbon front-loading) . . .

    I would also like to address one attempt to reconcile this problem: Howard Odum’s “emergy” concept. While I applaud his recognition of this problem, and his efforts to address it, “emergy” really doesn’t address the accounting impossibility highlighted above. While “emergy” recognizes the need to account for all energy inputs, it provides no methodology to get around the process of actually counting them, as we regress infinitely step-by-step back from the assembly line itself. As a result, “emergy” calculations must either draw an artificial boundary somewhere (resulting in the same long tail of unknown significance) or must resort to mere guesses about the inputs. (I recognize that Odum’s “emergy” also addresses the cost of transformation between different energy qualities–this doesn’t eliminate the problem caused by the “long tail” of energy inputs where such transformation must be considered, and where Odum presents no accounting theory or proxy measurement methodology to measure these inputs in aggregate.)

    Price-Estimated EROEI: I have proposed that, in order to calculate “systemic” EROEI, we must use some sort of proxy-calculation that gets around the accounting impossibility highlighted above. While I’ve reviewed several options, the only one that seems workable is what I’ve called the price-estimated EROEI method.

    In the price-estimated EROEI methodology, we attempt to use the price mechanism to account—by proxy—for this long tail of energy inputs. The basic calculation is quite simple: convert the financial cost to build and maintain the system into units of the same energy produced by the system and then compare to the amount of energy that the system will produce over its expected lifespan. I’ve gone through two applications of this methodology below. I’d like to be the first to recognize that there are very significant concerns with this methodology. Just to name a few: inaccuracy caused by the differing energy value of the input energy type actually used compared to the output energy type; price distortions caused by currency fluctuations, market inefficiencies, and market failures; the unaccounted for externalities of the actual inputs, especially fossil fuels, compared to the often fully internalized equivalent in the clean renewable energy produced.

    While there are many legitimate criticisms of the price-estimated EROEI method (some listed above), one of the more frequent criticisms is, in my opinion, unfounded, and should be rebutted. Many people have suggested that the energy used, for example, to feed and house a person involved in the production of, say, wind turbines, shouldn’t be counted because that person would need to be housed and fed whether or not she was involved in turbine production. This critique is overly simplistic: the reason that this energy input must be counted is because of the concept of opportunity cost. If our wind turbine worker was not involved in that process, she could be involved in another energy-producing activity. Therefore, because she must give up these alternatives in order to work on wind turbines, this energy is accurately accounted as an input in wind-turbine production. Additionally, because price-estimated EROEI is an attempt to calculate the systemic EROEI, we must consider that, if this energy is not accounted for, we may be assuming that society can support a component worker that, in fact, cannot be supported and will be “cut” through population shrinkage (“die off”) and economic contraction.

    Example price-estimated EROEI calculation for solar photovoltaics: LA Solar PV Installation: This 2009 installation is my example for a price-estimated EROEI calculation. I think it’s a good example (no example is perfect) for several reasons: at 1.2 MW, it’s modest in size, but large enough to reap economies of scale; because it is installed on an existing roof space, there is no land cost associated with the installation (that, in some circumstances, could present acquisition costs or environmental compliance/impact statement costs not truly representative of net energy issues); because it is in California, where the average cost of electricity (and especially the peaking “sunny day” electricity that solar provides) is higher, it will provide a more conservative estimate; and because it is located in the downtown of a major metropolitan area, it will not require significant transmission investment to provide a true measure, and is therefore also more conservative. Finally, there are good cost and output numbers available for the site.

    Basic data: 1.2 MW array installed 2009 in Los Angeles, cost $16.5 million up front (ignoring rebates/tax credits/incentives), projected financial return of $550,000 per year. At the rough California rate of $0.15 per kWh, that’s about 4 GWh per year (conservative).

    Price-estimated EROEI calculation: The $16.5 million up front is, at $0.09/kWh (here using the national average, as there’s no reason to think that manufacturers would use primarily California peaking power to build this system), an input of 183 GWh through installation (I’m ignoring the relatively small maintenance costs here, which will also make the figure more conservative). If we assume a life-span of 40 years, then the energy output of this system is 160 GWh. That’s a price-estimated EROEI of 0.87:1.

    Example price-estimated EROEI calculation for wind: I’ve had a more difficult time finding a recent wind project where clear data (on both cost and actual, as opposed to nameplate, output) is readily available. As a result, I’ve chosen a 2000 Danish offshore wind project at Middelgrunden. While up-front expenses may be higher off-shore (making the resulting EROEI here more accurate for offshore projects than on-shore ones), I think this is a relatively modern installation (2 MW turbines). If readers have more current projects with full data, please provide them in the comments–another point for investigation is whether the price-estimated EROEI of solar and wind has been improving or is holding relatively steady.

    Basic data: Cost of $60 million, annual energy output 85 GWh.

    Price-estimated EROEI calculation: At the US national average rate for electricity ($0.09/kWh), the $60 million up-front energy investment works out to 666 GWh. Using a life-span of 25 years (and assuming zero maintenance, grid, or storage investment, making the result artificially high), the energy output comes to 2125 GWh. That’s a price-estimated EROEI of 3.2:1.
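    Both worked examples reduce to a single formula. A minimal sketch, using only the figures quoted in the text:

```python
# Price-estimated EROEI: lifetime energy output divided by the energy
# equivalent of the system's financial cost at a chosen electricity price.
# The input figures below are the article's own; the function is a sketch.

def price_estimated_eroei(cost_usd, price_usd_per_kwh, annual_output_gwh,
                          lifespan_years):
    energy_input_gwh = cost_usd / price_usd_per_kwh / 1e6  # kWh -> GWh
    energy_output_gwh = annual_output_gwh * lifespan_years
    return energy_output_gwh / energy_input_gwh

# LA solar PV (2009): $16.5M cost, ~4 GWh/yr, 40-year life, $0.09/kWh
solar = price_estimated_eroei(16.5e6, 0.09, 4.0, 40)   # ~0.87:1

# Middelgrunden offshore wind (2000): $60M cost, 85 GWh/yr, 25-year life
wind = price_estimated_eroei(60e6, 0.09, 85.0, 25)     # ~3.2:1
```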

    Again, these are just representative samples, and I recognize the weaknesses and uncertainties of this model. However, I must ask two questions. First, if there are market or price inaccuracies internal to these calculations that make them inaccurate, how can they be explained? For example, if it’s inaccurate to use the price of a unit of energy output as the cost of energy input, why hasn’t the market addressed this? Second, recognizing these inaccuracies, how do we know whether this measure is more or less inaccurate than “conventional” EROEI measures? We cannot definitively characterize the uncertainties inherent in price-estimated EROEI, nor can we definitively characterize the significance of the unaccounted-for energy in the “long tail” of conventional EROEI measurements, so we have little basis to say that one measure is more accurate than the other. We can only say with high confidence that “conventional” EROEI is some degree higher than an inclusive “systemic” EROEI—how much higher we do not know. But if the very high (40:1, 70:1, etc.) figures sometimes floated for the EROEI of renewables are accurate, how can we explain the inability to monetize this value?

    This fundamental uncertainty does not render the discussion pointless. I think that we can say with confidence that existing EROEI measures do not answer one question that is critical to the continuation of civilization as we know it: do renewable energy systems like wind, solar, tidal, or geothermal power have sufficiently high EROEI to facilitate a transition away from fossil fuels? This leads us to the precautionary principle which, crudely summarized, states that where the potential impact is significant and we have insufficient confidence to choose between two future scenarios, prudence demands that we plan for the more pessimistic. This certainly seems to be the case here: the prospects for “transition” look starkly different at “systemic” EROEI values of 40:1 vs. 4:1. In this vein, and in the final post in this series, I will explore the significance of EROEI uncertainty and our path forward in light of this uncertainty.

     


    Mahle’s 1.2 L demonstrator shows potential of aggressive downsizing


    6637_6636_ART.jpg

    Downsizing rates of up to 50% are feasible. That is one Mahle conclusion from developing and testing its demonstrator engine with 1.2 L displacement and two-stage turbocharging.

    According to Professor Heinz Junker, the hybrid-electric vehicle (HEV) and the downsized combustion engine have more in common than is sometimes perceived. During a technical briefing at Stuttgart, the Mahle CEO and Chairman of the Management Board said, “Both approaches seek to shift the dominant load area to a more efficient part of the map. After all, the electric motor does little else than to boost a small combustion engine in the lower rev band where the combustion engine alone cannot provide the required driveability.”

    Pure electric driving, which is often advertised as an advantage of full hybrid vehicles, is an option that will only go so far, Junker said: The high overall weight of a full HEV dramatically limits its pure electric reach.

    This analysis may be backed up by a recent trend of HEV manufacturers, such as Toyota, to install larger combustion engines in new HEV model generations to improve the long-distance driving behavior and fuel consumption. The 2009 Prius HEV, for instance, is equipped with a 1.8-L gasoline engine rated at 98 hp (73 kW) which has replaced the earlier generation’s 1.5-L engine.

    The true bottom-line effect of the electric motor and its contribution via recuperation of braking energy is fairly small, according to Junker. “If you take a naturally aspirated gasoline engine as reference, an aggressively downsized gasoline engine alone can improve the fuel efficiency by up to 40%,” he explained. “If you consider the additional 5% gain that may be contributed by the electric motor of a full HEV, it is clear that downsizing offers the biggest single potential.”

    Economically, it makes sense as well, Junker noted, since the total additional cost for a gasoline downsizing efficiency gain in the 30% league “is between 2000 and 3000 euro. In a full HEV, this does not even pay the traction battery.”

    Aggressive downsizing of up to 50%

    To underpin this ambitious statement, Mahle has developed a radically downsized gasoline engine that squeezes 144 kW (around 200 hp) at 6500 rpm and up to 287 N•m between 2500 and 3000 rpm (30 bar brake mean effective pressure, bmep) out of a three-cylinder 1.2 L gasoline powerplant. It is designed to replace a 2.4-L engine in a family-size car with a curb weight of up to 1.6 tons. This demonstrator power pack was designed to meet the Euro 5 emissions standard and is equipped with the key enabling technologies the supplier has to offer.

    To overcome the current limits to downsizing that are in the 30% area, the weight-optimized demonstrator engine is equipped with a low-friction variable valvetrain, featuring Cam-in-Cam technology, split cooling, friction optimized power cell, controlled oil pump, lightweight valves with interior cooling, high-load exhaust gas recirculation (EGR), and two turbochargers.

    The specific stationary consumption of the engine is as low as 295 g/kWh at 2000 rpm and 4 bar bmep. At its optimum point, the three-cylinder engine consumes a mere 234 g/kWh. This compares to the average specific consumption of standard downsized gasoline engines, which is often in the range of 360 to 390 g/kWh.
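    For reference, the relative improvement implied by these specific-consumption figures works out as follows (a simple percentage, using only the numbers quoted above):

```python
# Relative fuel savings implied by the specific-consumption figures above
# (g/kWh; lower is better). Baselines are the quoted 360-390 g/kWh range
# for standard downsized gasoline engines.

def pct_improvement(baseline, improved):
    """Percentage reduction of 'improved' relative to 'baseline'."""
    return (baseline - improved) / baseline * 100

low_end = pct_improvement(360.0, 295.0)   # ~18% vs. the low end of the range
high_end = pct_improvement(390.0, 295.0)  # ~24% vs. the high end
```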

    Maximum EGR strategy

    “EGR, in particular, is an enabling technology for downsizing,” says Junker, “as it helps to bring down NOx emissions and fuel consumption. With high EGR rates, there is no need for gasoline enrichment to protect components from too much heat at full load.”

    “Stoichiometric operation at full load, thanks to very high EGR rates, can save up to 10% of fuel in a highly charged gasoline engine,” he noted. “Just as importantly, very high EGR rates can help to avoid the need for a selective catalytic reduction system,” explained Jörg Rückauf, Mahle’s Director of Research & Advanced Engineering.

    Currently, one threshold for very high EGR rates is a lack of exhaust pressure in parts of the engine map. If the air pressure in the intake duct, for instance, is higher than the exhaust pressure, high EGR rates will simply be impossible to achieve, the Mahle engineer noted. Solutions to date, based on throttle valves, increase throttle losses and bring down the gasoline engine’s efficiency. Mahle avoids this side effect by installing a fast-rotating charge air valve (called SLV) that only closes the intake duct for the shortest of moments and, thus, briefly decreases the pressure downstream where the exhaust gas inlet is located.

    By this momentary effect, it becomes possible to use high EGR rates despite comparably low exhaust gas pressure. The rotating flap motion is electronically synchronized with the cylinder movement to maximize the benefit.

    “As there is no permanent throttle effect, the rotating air valve does not affect fuel consumption,” Rückauf explains.

    Controlled oil pump

    Another new strategy for increasing the efficiency of a combustion engine is to reduce the losses in ancillary components. By controlling the volumetric flow of an oil pump, for instance, the component will not always run along with the engine speed. This makes perfect sense, as the engine’s oil demand levels off from a certain speed.

    “Depending on the chosen control strategy, an oil pump can, therefore, contribute up to 2% better fuel efficiency in the New European Driving Cycle,” said Dr.-Ing. Uwe Mohr, Vice President Corporate Research, Advanced Engineering & Services. To exploit this potential, the supplier uses its patented controlled pendulum slider oil pump.

    Compared to conventional rotary vane pumps, the pendulum slider pump is claimed to have a 10% better efficiency – and shows dramatically less friction. This is due to the principle of operation: Compared with vane pumps, the individual pump cells of the pendulum slider pump are not sealed via sliding friction but by the rolling motion of pendulums in grooves.

    “As a consequence of the low wear, the pump maintains its high efficiency over the complete lifetime,” said Mohr.

    Currently, the supplier is preparing to use the pendulum slider pump for a map-controlled oil pump application in a passenger car.

    Joerg Christoffel

     


    Gasoline-diesel ‘cocktail’: A potent recipe for cleaner, more efficient engines


    MADISON — Diesel and gasoline fuel sources both bring unique assets and liabilities to powering internal combustion engines. But what if an engine could be programmed to harvest the best properties of both fuel sources at once, on the fly, by blending the fuels within the combustion chamber?

    The answer, based on tests by the University of Wisconsin-Madison engine research group headed by Rolf Reitz, would be a diesel engine that produces significantly lower pollutant emissions than conventional engines, with an average of 20 percent greater fuel efficiency as well. These dramatic results came from a novel technique Reitz describes as “fast-response fuel blending,” in which an engine’s fuel injection is programmed to produce the optimal gasoline-diesel mix based on real-time operating conditions.

    Under heavy-load operating conditions for a diesel truck, the fuel mix in Reitz’ fueling strategy might be as high as 85 percent gasoline to 15 percent diesel; under lighter loads, the percentage of diesel would increase to a roughly 50-50 mix. Normally this type of blend wouldn’t ignite in a diesel engine, because gasoline is less reactive than diesel and burns less easily. But in Reitz’ strategy, just the right amount of diesel fuel injections provide the kick-start for ignition.

    “You can think of the diesel spray as a collection of liquid spark plugs, essentially, that ignite the gasoline,” says Reitz, the Wisconsin Distinguished Professor of Mechanical Engineering. “The new strategy changes the fuel properties by blending the two fuels within the combustion chamber to precisely control the combustion process, based on when and how much diesel fuel is injected.”

    Reitz will present his findings today (Aug. 3) at the 15th U.S. Department of Energy (DOE) Diesel Engine-Efficiency and Emissions Research Conference in Detroit. Reitz estimates that if all cars and trucks were to achieve the efficiency levels demonstrated in the project, it could lead to a reduction in transportation-based U.S. oil consumption by one-third.

    “That’s roughly the amount that we import from the Persian Gulf,” says Reitz.

    Two remarkable things happen in the gasoline-diesel mix, Reitz says. First, the engine operates at much lower combustion temperatures because of the improved control — as much as 40 percent lower than conventional engines — which leads to far less energy loss from the engine through heat transfer. Second, the customized fuel preparation controls the chemistry for optimal combustion. That translates into less unburned fuel energy lost in the exhaust, and also fewer pollutant emissions being produced by the combustion process. In addition, the system can use relatively inexpensive low-pressure fuel injection (commonly used in gasoline engines), instead of the high-pressure injection required by conventional diesel engines.

    Development of the blending strategy was guided by advanced computer simulation models. These computer predictions were then put to the test using a Caterpillar heavy-duty diesel engine at the UW-Madison Engine Research Center. The results were “really exciting,” says Reitz, confirming the predicted benefits of blended fuel combustion. The best results achieved 53 percent thermal efficiency in the experimental test engine. This efficiency exceeds even the most efficient diesel engine currently in the world — a massive turbocharged two-stroke used in the maritime shipping industry, which has 50 percent thermal efficiency.

    “For a small engine to even approach these massive engine efficiencies is remarkable,” Reitz says. “Even more striking, the blending strategy could also be applied to automotive gasoline engines, which usually average a much lower 25 percent thermal efficiency. Here, the potential for fuel economy improvement would even be larger than in diesel truck engines.”

    Thermal efficiency is defined by the percentage of fuel that is actually devoted to powering the engine, rather than being lost in heat transfer, exhaust or other variables.

    “What’s more important than fuel efficiency, especially for the trucking industry, is that we are meeting the EPA’s 2010 emissions regulations quite easily,” Reitz says.

    That is a major commercial concern as the bar set by the U.S. Environmental Protection Agency is quite high, with regulations designed to cut about 90 percent of all particulate matter (soot) and 80 percent of all nitrogen oxides (NOx) out of diesel emissions.

    Some companies have pulled out of the truck engine market altogether in the face of the stringent new standards. Many other companies are looking to alternatives such as selective catalytic reduction, in which the chemical urea (a second “fuel”) is injected into the exhaust stream to reduce NOx emissions. Others propose using large amounts of recirculated exhaust gas to lower the combustion temperature to reduce NOx. In this case, ultra-high-pressure fuel injection is needed to reduce soot formation in the combustion chamber.

    Those processes are expensive and logistically complicated, Reitz says. Both primarily address cleaning up emissions, not fuel efficiency. The new in-cylinder fuel blending strategy is less expensive and less complex, uses widely available fuels and addresses both emissions and fuel efficiency at the same time.

    Reitz says there is ample reason to believe the fuel-blending technology would work just as well in cars because dual-fuel combustion works with lower-pressure and less expensive fuel injectors than those used in diesel trucks. Applying this technology to vehicles would require separate tanks for both diesel and gasoline fuel — but urea-based systems likewise require a separate tank. The big-picture implications for reduced oil consumption are even more compelling, Reitz says. The United States consumes about 21 million barrels of oil per day, about 65 percent (13.5 million barrels) of which is used in transportation. If this new blended fuel process could convert both diesel and gasoline engines to 53 percent thermal efficiency from current levels, the nation could reduce oil consumption by 4 million barrels per day, or one-third of all oil destined for transportation.
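    The arithmetic in this paragraph can be checked directly (the figures are the release's own):

```python
# Checking the release's arithmetic on U.S. oil consumption.

us_consumption_mbpd = 21.0   # million barrels per day
transport_share = 0.65

transport_mbpd = us_consumption_mbpd * transport_share  # ~13.65 ("about 13.5")

projected_savings_mbpd = 4.0
# ~0.29, i.e. roughly one-third of transportation oil use
fraction_of_transport = projected_savings_mbpd / transport_mbpd
```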

    Computer modeling and simulation provided the blueprint for optimizing fuel blending, a process that would have taken years through trial-and-error testing. Reitz used a modeling technique developed in his lab called genetic algorithms, which borrow some of the same techniques of natural selection in the biological world to determine the “fittest” variables for engine performance.
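The genetic-algorithm approach described above can be sketched in miniature. The code below is an illustrative toy, not Reitz's engine model: it evolves a population of candidate "engine settings" toward the maximum of a made-up fitness function, using the selection, crossover, and mutation loop that defines the technique. The two parameters and the fitness peak at (0.6, 0.3) are invented for the example.

```python
import random

random.seed(0)  # deterministic run for reproducibility

# Toy fitness: imagine the two genes are, say, injection timing and
# fuel-blend ratio, with "efficiency" peaking at (0.6, 0.3).
def fitness(ind):
    x, y = ind
    return -((x - 0.6) ** 2 + (y - 0.3) ** 2)

def evolve(pop_size=30, generations=60, mutation=0.1):
    # Random initial population in the unit square.
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = (a[0], b[1])                # crossover: mix two parents' genes
            child = tuple(g + random.gauss(0, mutation) for g in child)  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges near (0.6, 0.3)
```

A real engine application replaces the toy fitness with an expensive combustion simulation, which is exactly why Reitz's group uses the GA: it needs no gradients, only the ability to score candidate parameter sets.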

    ###

    The work is funded by DOE and the UW-Madison College of Engineering Diesel Emissions Reduction Consortium, which includes 24 industry partners.

    — Brian Mattmiller, 608-890-3004, bsmattmi@engr.wisc.edu

Source: http://www.eurekalert.org/pub_releases/2009-08/uow-ga073109.php

    Everything but the engine

    by Bruce Morey

    While an economical engine is the heart of a fuel-efficient vehicle, all other components contribute as well.
    From tires to body styles, here is a look at vehicle technologies that save fuel.

Detailed information is provided in the attached PDF file.

Source: July 2009 AEI

     

File0002.PDF (3.65 MB)

    Then & now

    by Roger Blanchard

    Recently a friend gave me a copy of a January 22, 1973 issue of Newsweek. The cover title was “The Energy Crisis”. It’s interesting to look back and see how things have changed; or, to be more accurate, not changed.

    Technological optimism prevailed back then as it does today. Here are a few excerpts from the article concerning nuclear power:

“Any crisis policy would eventually be doomed, of course, unless technology produces important new energy sources. And, happily, the outlook looks brighter after the ‘70s. By the mid-1980s, nuclear power, paced by the exotic fast-breeder reactor, will begin taking the load off fossil fuels. Nuclear energy may produce 13% of all U.S. power in 1985 vs. less than 1% today and then is expected to boost its share to 26% by 2000.”

“The fast-breeder reactor will have a major impact because, in seeming defiance of the laws of physics, it produces more atomic fuel (plutonium) than it burns.”

    The article stated that experts in nuclear energy were confident in the success of fast-breeder reactors. As of 2009, there are no fast-breeder reactors in the U.S. and it’s questionable whether fast breeder reactors will ever provide energy for the U.S. The Clinch River experimental reactor, in Tennessee, was shut down years ago because of exorbitant costs and technical problems. Rather than providing 26% of our energy needs in 2000, nuclear power provided about 8% of total U.S. energy demand.

    In terms of oil shale, the article stated:

    “By 1985, rising prices of crude oil and natural gas may force two other promising developments onto the market–oil produced from shale so abundant in the American West, and gas produced from coal fields. There are pilot plants using both processes, but so far their output is too costly to compete. Shale oil, for instance, would cost about $7.50 a barrel vs. the present price of $3.25-$3.50 for a barrel of crude.”

    Years ago I read congressional testimony from an executive of Exxon who at the time, ~1980, expressed the belief that by 2000, the U.S. would be producing 2 mb/d of oil from oil shale and 8 mb/d by 2025. As of 2009, no oil is produced from oil shale and it’s likely that no significant amount will be produced in the next 16 years.

    The article mentioned atomic fusion and hydrogen, giving the impression that both would be a possibility at some point in the not-so-distant future. It’s not unusual to see similar statements today in media articles about energy.

It’s nice to be an optimist, but it’s wise to be realistic when it comes to energy. There is considerable optimism these days, at least among some people, about cellulosic ethanol and oil from algae. In my view, these energy sources will only go as far as government subsidies take them.

    There is also considerable optimism about electric and plug-in hybrid electric vehicles. Electric vehicles have been around since ~1890 and were fairly common in the early 1900s. The problems that have historically dogged electric vehicles have not suddenly gone away and I think those problems will limit their extent of market penetration in the future.

    First, electric vehicles have been and continue to be expensive relative to petroleum-based vehicles. Plug-in hybrid electric vehicles will be expensive as well.

    In a Time magazine article (Sept. 29, 2008), Bob Lutz, Vice Chairman at GM, was quoted as saying that GM hoped to bring the cost of the Chevy Volt (plug-in hybrid electric) down to $40,000 or less. Even if GM gets the price of the Volt to less than $40,000 (assuming they ultimately produce it), it won’t be much less. When taxes, options and freight are added in, the price could be considerably above $40,000.

If we assume a price of $40,000 for a Volt, how does that compare to a Nissan Versa in economic terms? According to the Nissan website, the Versa can be purchased for $10,000, so there is a $30,000 difference between a Versa and a Volt. The government is supposed to give a $7,500 tax credit for the Volt, so a Volt would cost a buyer $32,500, or about three times the price of a Versa.

    Let’s assume you buy a Versa and drive an average of 10,000 miles a year, the car gets an average of 30 miles/gallon and the average price of gasoline over the time period you own it is $4.00/gallon. How long can you drive the Versa before you have spent $22,500 (the difference between $32,500 and $10,000) on gasoline? The answer is nearly 17 years. That’s considerably longer than most people own a vehicle.
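Blanchard's payback figure holds up. A quick check using his own assumptions:

```python
# Verify the gasoline-payback arithmetic, using the article's assumptions.
miles_per_year = 10_000
mpg = 30
gas_price = 4.00        # dollars per gallon
price_gap = 22_500      # $32,500 Volt (after tax credit) minus $10,000 Versa

annual_fuel_cost = miles_per_year / mpg * gas_price   # ~$1,333 per year
years_to_break_even = price_gap / annual_fuel_cost    # 16.9 years

print(f"Annual fuel spend: ${annual_fuel_cost:,.0f}")
print(f"Years to spend the price gap on gasoline: {years_to_break_even:.1f}")
```

The exact answer is 16.875 years, hence "nearly 17 years": longer than most people keep a vehicle, which is the point of the comparison.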

I expect a practical electric vehicle to cost at least $35,000-40,000, if not more. Electric vehicles will be out of the price range of a significant portion of the American population. There will be relatively wealthy people who will buy electric vehicles and rave about how wonderful the vehicles are, but that won’t convince those individuals who can’t afford an electric vehicle to buy one.

    Many people who buy motor vehicles buy them with some expectation that they can use them for carrying and towing purposes. Electric and plug-in hybrid electric vehicles will not be vehicles you’ll want for towing purposes because of a lack of power, and their carrying capacity will be seriously limited. I may change my view of the towing capacity of electric vehicles when I see one pulling a snowmobile trailer with 6-8 snowmobiles up to the UP from southeastern Michigan as I often see with diesel-powered vehicles.

    There are also the problems of range, especially when various electrical accessories are used, and battery charging time.

    Whether people want to accept it or not, oil has significant advantages compared to alternatives. Two advantages that most people aren’t aware of are that oil distillates have very high enthalpy of combustion values and that distillates have high energy densities.

    Enthalpy of combustion is an important factor related to the energy density of a fuel and the fuel’s power capacity. Energy density is an important factor in defining how far a vehicle can go on a tank of fuel and how large the tank has to be.

    Table 1 displays enthalpy of combustion values for various fuels.

    table1.jpg

    Based upon the data in Table 1, it would take about 18 times more hydrogen molecules, 8 times more methanol molecules and 4 times more ethanol molecules to obtain the same amount of energy as obtained from a molecule of octane.
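Since Table 1 is an image, the ratios above can be reproduced from textbook standard enthalpies of combustion. The kJ/mol values below are common literature figures and are my assumptions, not necessarily Table 1's exact numbers, which is presumably why the article says "about 18" where these values give closer to 19:

```python
# Per-molecule energy comparison via standard enthalpies of combustion.
# Values in kJ/mol are approximate textbook figures (assumed here).
dH = {
    "hydrogen": 286,
    "methanol": 726,
    "ethanol": 1367,
    "octane": 5470,
}

for fuel in ("hydrogen", "methanol", "ethanol"):
    ratio = dH["octane"] / dH[fuel]
    print(f"Molecules of {fuel} per molecule of octane: {ratio:.1f}")
```

With these figures the ratios come out to roughly 19, 7.5, and 4, in line with the article's "about 18 times", "8 times", and "4 times".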

    Table 2 displays energy density values for various fuels. The high energy density of oil components makes them particularly valuable as transportation fuels due to the small volume required for containing high energy content.

    Table 2: Energy Densities for Common Fuels

    table2.jpg

    The energy densities of H2 and CH4 are much lower than octane because they are gases. In gases, the molecules are much farther apart than in a liquid, even when gases are compressed to very high pressure. The low energy densities of gaseous fuels make them poor choices for transportation applications even if they are compressed to very high pressures. For gaseous-fuel-powered vehicles, the fuel tanks must be much larger, the vehicle must get much better mileage per unit of fuel, the vehicle must be refilled more frequently or some combination of the three must be used.
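To make the tank-size penalty concrete, here is a rough comparison using approximate volumetric energy densities from the literature. The MJ/L figures are my assumptions, not Table 2's exact values: liquid octane around 34 MJ/L, methane at roughly 250 bar around 9 MJ/L, hydrogen at roughly 700 bar around 5.6 MJ/L.

```python
# Tank volume needed to match the energy in a 50 L gasoline tank.
# Volumetric energy densities in MJ/L are approximate literature values
# (assumed here; Table 2's figures may differ somewhat).
density = {
    "octane (liquid)": 34.0,
    "CH4 (250 bar)": 9.0,
    "H2 (700 bar)": 5.6,
}

energy = 50 * density["octane (liquid)"]   # 1700 MJ in a 50 L gasoline tank
for fuel, d in density.items():
    litres = energy / d
    print(f"{fuel}: {litres:.0f} L tank for the same energy")
```

Even at very high compression, the gaseous fuels need tanks several times the size of a gasoline tank for the same onboard energy, which is the practical point being made above.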

    There is considerable talk now about making the U.S. energy independent. Although it’s a laudable goal, I don’t see that happening without major changes in the American lifestyle. With declining future U.S. oil production, it would not be surprising to see the percentage of U.S. oil imports increase even if we manage to reduce our oil consumption rate in coming years. That is a problem that most, if not all, politicians would prefer not to admit to the American public.

    (Note: Commentaries do not necessarily represent ASPO-USA’s positions; they are personal statements and observations by informed commentators)

     
