The Role of Technology in Meeting Current and Future Petroleum Energy Demand

John Rudesill

Infinite Energy
August 31st, 2005

The current debate over “peak oil” is both vigorous and contentious, reflecting the high stakes involved. Optimists maintain that the future availability of petroleum crude oil is unlimited and there is no need to worry about satisfying rapidly increasing consumption, even though production in major mature fields is declining and the rate of major new discoveries is in danger of not keeping pace with reserves depletion. Pessimists assert that the historical peak in petroleum production is imminent and will herald dire shortages and political unrest. Other articles in this issue address the scientific objectivity and validity of these opposing viewpoints.

Many discussions of “peak oil” only briefly consider the role of technology, as though it was not important in meeting petroleum product demand. Our task in this article is to assume as a given that supplies of natural petroleum are unlimited and then attempt to answer the following question: What role does technology have in limiting our ability to meet current and future demand for petroleum products? We will endeavor to justify to the reader our conclusion that while technology does play a very important role in meeting our petroleum fuel needs, that role is dependent on investment and, ultimately, political priorities.

Introduction and Background

A meaningful discussion about any role technology plays needs a working definition of the term technology. In the most general case, technology can be seen as any means that enables and/or facilitates the accomplishment of a given useful result. We may recall the old cartoon image of a caveman struggling to move a stone too heavy for him to budge. Then he gets an idea (light bulb above head) and fashions the first lever from a sturdy tree limb, which lets him pry the stone along in useful increments with reasonable effort. This example demonstrates the principles of mechanical advantage and leverage. Ideas are initially intellectual curiosities and only become technology when they have demonstrated a practical ability to achieve a given useful economic result. In this sense, language and mathematics, for example, qualify as technology when applied to get a useful result. The value of technology is a function of the magnitude of its potential to create economic advantage. The value of some technologies is so great that the political power structures are obliged to assert control over access. Energy supplies and weapons of mass destruction are two such areas currently subject to careful monitoring and control.

Petroleum products reach consumers after passing through a multitude of complicated processes highlighted by several key steps. The sequence begins with exploration and production (E&P). This phase includes exploration, drilling, discovery, well completion, production of the crude petroleum oil from the well, and transport to refineries. At the refinery, the raw crude oil is converted into its various petroleum products and from there transported to consumers. Many of these conversion steps require intense and sophisticated technology. As we study the key steps, we will focus on the current status in the U.S. (which has the world's largest and most sophisticated refining industry). We will then review a range of technology choices available for meeting increasing petroleum product demand, especially in developing countries, and we will project the likely consequences of these choices. Finally, we will conclude with some preferred courses of action.

Petroleum products, as used, are essentially all liquid hydrocarbons. There are many new and historical processes to synthesize liquid hydrocarbons from other natural fossil carbon sources that are not formally considered petroleum crude oil. Technology provides the methods either to extract synthetic crude, in the case of oil shale and tar sands, or to convert coal, bitumen, petroleum coke, petroleum resid, and certainly natural gas into the desired liquid hydrocarbons. All of the synthesis scenarios require gasifying the carbon source and are often lumped under the subject "gas to liquids" (GTL). We will also comment on the developing schemes to make use of the large, even vast, untapped natural methane resources in remote locations, including methane hydrates in permafrost and ocean bottoms.

A wide range of resources is available to anyone interested in learning about petroleum issues; many are online, starting with the U.S. Government DOE and EPA web pages and the sites of many major oil companies. There are many recent books, as well as older sources and endless blogs, one can peruse for further perspective. Where a source is directly used it will be noted; otherwise the views are those of the author.

To help the reader grasp the magnitude of the petroleum industry on Earth, it is useful to know some contextual background information. Total worldwide production in round figures is about 80 million barrels per day (b/d); 1 barrel = 42 gallons. This figure includes actual crude oil and the crude equivalents of light hydrocarbon liquids condensed from natural gas production. The U.S. consumes about 20 million b/d, or 25% of the world total. This means that ~300 million people, only ~5% of the world's population, consume one-quarter of the petroleum produced. Petroleum is 40% of the total energy consumed in the U.S. and about 90% of that is used in transportation of all types. This converts to 2.8 gal. per day per person in the U.S., while the rest of the world averages only 0.42 gal. per day per person. The least developed countries have consumptions less than a tenth of a gallon per day. We are clearly the big consumers. It is important to realize, however, that if the U.S. stopped using any oil, it would only raise the average daily per capita consumption of the remaining six billion humans to 0.56 gal. per day. Conversely, for the same six billion humans to enjoy our standard of consumption would require production increases from 80 million b/d to 400 million b/d. That is a five-fold increase in production and use of petroleum. Even the optimistic oil supply proponents would have to admit that this level of greenhouse gas carbon dioxide emissions is asking for climatic problems.
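
These per capita figures follow directly from the production and population numbers just given; as a quick check using the article's own round values:

\[
\frac{20\times10^{6}\ \text{b/d}\times 42\ \text{gal/b}}{300\times10^{6}\ \text{people}} \approx 2.8\ \frac{\text{gal}}{\text{person}\cdot\text{day}}, \qquad
\frac{60\times10^{6}\times 42}{6\times10^{9}} \approx 0.42, \qquad
\frac{80\times10^{6}\times 42}{6\times10^{9}} \approx 0.56,
\]

and scaling U.S. consumption to everyone else gives \(6\times10^{9}\times 2.8/42 \approx 400\) million b/d.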

Here is a surprising statistic (from a DOE report by Energetics, cited in the Refining section) that gave me pause. While petroleum provides 40% of our total energy as stated above, the U.S. refining industry also consumes 7.5% of the total! When normalized to a petroleum basis, we find that the equivalent of almost 19% of the energy value of the petroleum itself is consumed during refining. Further consumption occurs during production, during transportation in crude tankers for imported oil, and from the refinery to the end user, all of which perhaps add another 1% to the 19%, giving ~20% as the total self-consumption. This is a very substantial conversion loss and denotes the intense technological input into the crude petroleum needed to raise its quality to meet the standards for clean and convenient use in transportation engines, heating, and power generation. The phrase "wells to wheels" was coined to capture the overall efficiency of realizing a unit of motive energy at the wheels of, say, a car compared to the energy potential at the well where the necessary quantity of petroleum originated. In the following sections, we will see how technology drives this efficiency.
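
The 19% figure is simply the ratio of the two percentages above, re-expressed on a petroleum basis:

\[
\frac{7.5\%\ \text{(share of total U.S. energy consumed by refining)}}{40\%\ \text{(share of total U.S. energy supplied as petroleum)}} \approx 0.19,
\]

i.e. nearly one-fifth of the energy content of the petroleum stream is spent getting it ready to use.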

Exploration and Production

Natural petroleum crude oil has been known since ancient times at places where it naturally seeps out from the Earth's surface, e.g. the famous La Brea Tar Pits. The modern extraction of petroleum dates from 1859, started by Col. Edwin Drake in Titusville, Pennsylvania. Crude oil is composed primarily of carbon and hydrogen atoms combined in thousands of different molecular variations, ranging from a low molecular weight of 16 g/mol for methane (CH4) up to 1,000 or more for very large asphalt molecules. Most commercial crude oils have an average molecular weight in the 400-500 range. Crude oil composition varies greatly from one source to another, such that each producing zone has a unique chemical fingerprint identity. There are many impurities in crude oil, with sulfur and nitrogen atoms being the most common and problematic. Both can exceed 1% by weight and are substituted for carbon atoms in the hydrocarbon molecules. The most valuable crude oil has both low sulfur (< 1%, termed "sweet crude") and a lower average molecular weight ("light crude"). Such "light sweet crude" requires the least refining and yields the most gasoline, leading to the highest refining margins. Lesser amounts of oxygen and trace metal atoms like nickel and vanadium also occur and sometimes present special challenges during refining, which will be addressed in a later section.

Petroleum geologists are the specialists who probe the jumble of rock formations as much as five miles below the surface on land and below the ocean bottoms and decide where to drill exploratory wells. This is the highest risk part of the “wells to wheels” enterprise and enormous investments are required to proceed. How then do petroleum geologists decide where to drill exploratory wells?

In the days of Col. Drake, drilling was attempted by guesswork based on surface seeps and often produced wells at success rates ranging from as poor as 1 in 40 attempts to, at best, 1 in 10. This form of near-random drilling is truly "wildcatting." A diagram of the various technologies used to find oil is available on the ChevronTexaco website ("Petroleum Prospecting Primer"): http://www.chevron.com/learning_center/primer/.

Finding and producing petroleum has benefited from dramatic advances in technology in recent years and in some cases has been at the forefront of specialized development and use of computer hardware and software. Exxon's website (http://www.exxonmobil.com/corporate/Campaign/Corp_campaignhome.asp) has information on many aspects of E&P technology. We will summarize the relevant points in the following paragraphs.

The interest in petroleum crude oil in the second half of the nineteenth century was mainly to replace coal oil in kerosene lamps introduced in 1854. The gasoline fraction was burned off, as it was too volatile for safe use in technology of the day. What this points out is that technology advances play a powerful role on the demand side of the economic equation as well as the supply side. Toward the end of the nineteenth century, the spark ignition internal combustion engine became practical and the demand for the gasoline fraction of crude oil to fuel it started to grow even as the electric light was displacing oil lamps. The twentieth century saw continuous growth in the demand for gasoline to fuel automobiles and the near complete dominance of electric lighting.

An oil well success ratio of 1 in 10 was unacceptable, and geologists began to recognize that certain types of geologic sedimentary basins overlie potential oil-bearing strata. They also began to realize that natural petroleum was apparently forming (biogenic or otherwise) in what are called source rocks containing trapped decomposing carbonaceous matter. Mobile liquids then migrated to nearby porous reservoir rock, where they stayed only if capped by impervious strata that formed a trap. It is very expensive and slow to map a promising formation with numerous core drillings, as is done for mining surveys. The outcome was the development of seismic surveying technology, begun around 1930. It is a crude form of what we today recognize as high-tech medical ultrasound imaging. Geologists would place a string of evenly spaced geophones (like microphones) in straight lines over the strata of interest and then set off a dynamite charge to create a low-frequency sound pulse. The geophones would receive the reflected sound pulses, and the signals from each phone were recorded either on paper strip charts or, in recent years, on magnetic media. The plotted result was a "2D" two-dimensional slice image of the underlying strata. The resolution was poor by today's standards, and there is a current effort to re-examine old seismic surveys with modern interpretation tools to see if reservoirs were missed. As technology improvements provided better quality data acquisition and faster computers, it became practical to run parallel 2D slices and construct a "3D" three-dimensional image of the strata. An array of thousands of geophones is used in modern surveys. Some of the most powerful supercomputers are now used to process seismic data, and Exxon has what they call "4D" imaging, which even shows the movement of oil in the reservoir strata by repeating "3D" seismic surveys spaced over time. Petroleum geologists and reservoir engineers employ very sophisticated and highly proprietary software to model the life of a producing reservoir so they can optimize initial well placement, production and depletion rates, and plan eventual rework and maintenance of the reservoir. The incentive, of course, is to maximize the recovery and profits long term. The successful drilling ratio now approaches 1 in 2.
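
The quantity every one of these reflection surveys ultimately measures is two-way travel time, from which reflector depth follows once the local sound velocity is known:

\[
d = \frac{v\,t}{2}, \qquad \text{e.g. } v \approx 10{,}000\ \text{ft/s (an assumed, typical sedimentary-rock velocity)},\ t = 2\ \text{s} \;\Rightarrow\; d \approx 10{,}000\ \text{ft}.
\]

The 2D, 3D, and 4D images are built by solving this relationship for a very large number of source-receiver pairs at once.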

Seismic surveying has been adapted to map the strata below the ocean bottom by towing strings of evenly spaced hydrophones behind a vessel and using compressed-air guns to supply the sound pulses. Today deep water offshore oil wells are made possible by remarkable advances in technology. It is now practical to drill from a floating platform in 7,000 feet of water to depths of five miles below the sea bottom. When combined with offset drilling (introduced in the late 1970s), which enables horizontal boring, we can access ~100 square miles of undersea producing zones from a single platform using multiple offset wells splayed out like spokes. A deep water project with a floating platform and multiple wells will cost several hundred million dollars and take several years to start producing. The ocean bottom now accessible for exploration and drilling has expanded tremendously with the innovation of floating platforms with multiple offset wells and is no longer limited to the continental shelves along each continent's coast.
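
Treating that coverage as a circle swept by the longest offset wells gives a sense of the lateral reach involved (a rough geometric estimate, not a figure from the article):

\[
r = \sqrt{\frac{A}{\pi}} = \sqrt{\frac{100\ \text{mi}^2}{\pi}} \approx 5.6\ \text{miles}
\]

of horizontal departure from the platform.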

Wells were drilled for decades by mounting a fixed drill bit on the end of the drill pipe string and then rotating the entire drill string, lubricated by specially formulated drilling mud. That is quite a trick when you have miles of pipe hanging on the derrick and drive mechanism, and then you have to replace a dull or broken bit! The entire string of pipe has to be removed piece by piece and then reinserted after the bit is changed. Offset drilling technology uses a downhole electric motor-driven cutting head that can be canted like a wrist joint to accomplish the turn from vertical to horizontal. The heads are guided by various technologies, including inertial guidance. The systems have to operate at elevated temperatures up to 400°F and pressures up to 20,000 psig and are models of robust engineering.

Materials technology has also improved drill bit longevity and cutting efficiency, through advances such as diamond and other super-hard coatings for the bits.

There are a number of high-tech downhole instruments used to analyze the strata the hole is exposing for signs of oil and for information about the porosity of the rock. Among the instruments used are radiation sensors such as gamma and neutron probes, acoustic porosity sensors, temperature and conductivity probes, and others. The measurements are logged as a function of depth and used to determine the location of hydrocarbon zones and rock porosity. The field engineers use the data to make recommendations for finishing the well to put it into production.

A final note on high-tech instruments in the exploration phase: Gravity and magnetic anomaly maps are made from aerial surveys with gravity meters and magnetometers and are used to help decide where to begin seismic surveys. Sniffing instruments are also in use at or near the surface to sense telltale gases often found above oil and gas deposits. Portable gas chromatographs with various detectors can be used for this purpose.

During production, crude oil frequently is associated with brines, and a certain amount of brine is produced with the crude oil. The brine must then be separated and disposed of properly. Crude production takes place in several phases. Essentially all 11 million b/d of Saudi Arabian crude oil comes from what is called secondary production using what is termed "water flood." This means water from either deep aquifers or salt water from the Persian Gulf is pumped down injection flood wells, and the brine pushes or sweeps the crude oil along to the producing wells, where it comes to the surface under pressure from the water flood. If water injection is stopped, oil production soon stops. Reservoir modeling technology is used to optimize the placement of the water injection wells and to determine the optimum pumping rate to recover the most total crude oil. Primary production results from natural gas and hydraulic pressures in the producing zone forcing the crude oil to the surface. So-called gushers result when this pressure is allowed to vent without control. Rapid release of this pressure must be prevented, as it can severely damage the well and greatly reduce ultimate recovery. Production rate declines as the natural pressure falls off after a few years in service, and if the producing zone is not suitable for secondary water or carbon dioxide flood, a lift pump is inserted into the well; the well is then called a "stripper well," yielding less than 20 b/d. There are huge numbers of stripper wells operating in all the older major producing regions. I remember seeing them as a child in Southern California in the early 1950s.

One of the major production challenges is transporting the oil from where the wells are to where the demand is. In the case of the U.S., we import several million b/d onboard supertankers. Here again technology has delivered by providing floating tankage that can exceed two million barrels in one load. These massive ~1,000-foot-long vessels are double hulled for protection from hull breaches and leakage, and they are so automated that the crew may number only a couple of dozen. They typically travel at 12-15 knots underway and may reach the U.S. Gulf Coast in about three weeks from the Middle East if they are too large to pass the Suez Canal. These ships are not built overnight, nor are the shipyards that build them readily available, and so as consumption remote from the wells increases it will be necessary to build more of these supertankers in phase with the demand increase. The current tanker fleet contains many old smaller tankers that need either to be scrapped or overhauled, but the needed investment is lagging, increasing the risk of oil spills.

Figure 1. Refinery outputs, 1996. From the Petroleum Refining Industry Study, available at www.eere.energy.gov/industry/petroleum_refining/pdfs/profile.pdf

Refining

In the final sections of this article, we will fold the foregoing E&P discussion into our base assumption that the crude oil supply is unlimited, so that we can focus on the roles technology plays in limiting the ability of the crude oil refining infrastructure to meet current and increasing demand for petroleum products.

Background
First, we need some background to establish the context. A very thorough study of U.S. refinery capabilities and technology issues, including energy consumption, is available at: http://www.eere.energy.gov/industry/petroleum_refining/pdfs/profile.pdf. This 124-page report prepared by Energetics in Columbia, Maryland, was issued in 1998 and is slightly out of date. I have adapted material from it for use in this section.

Crude oil refining is a large, mature industrial enterprise that does mainly two things: One, it removes unwanted impurities from the crude oil. Two, it converts the jumble of large hydrocarbon molecules into the structures most valued by consumers, i.e. gasoline. The U.S. market demands that about 45% of the crude oil be converted to gasoline. See Figure 1 and Table 1 for a breakdown of the typical U.S. refinery product slate, and note that there is a volume increase through the refinery due to a decrease in density of products relative to the crude oil feedstock, such that a 42-gallon barrel of feed gives 44.6 gallons of products. It is interesting that we have not built any new grassroots refineries in the U.S. since the early 1970s, coincidentally when U.S. crude production was peaking at about 10 million b/d. Instead there has been an ongoing closure of refineries and consolidation of ownership into a few rather large refining companies. Since both U.S. and world crude demand have steadily increased, it has been necessary to build refineries in other parts of the world (South Korea, India, and Indonesia, to name a few) to match refining capacity with demand. The present U.S. refining capacity is about 16 million b/d, so to meet our 20 million b/d consumption we have to import effectively 4 million b/d of refined products. Our domestic crude production is about 6 million b/d, so our refineries are processing about 10 million b/d of imported crude oils. The detailed accounting and logistics to sustain this enterprise in itself embody considerable advanced technology, and if they were short-changed, supply disruptions could result. A generic oil refinery flow diagram is shown in Figure 2 to give some idea of the complexity of a modern refinery with its various process units and their interconnections.

Table 1. Average yields for U.S. refineries in 2000. Available online at: www.eere.energy.gov/industry/petroleum_refining/profile.html.

Operating Priorities
After 30+ years as a fluid cracking catalyst research and development engineer, I came to appreciate the blunt priorities that govern how refineries are operated, regardless of how well we thought our catalysts performed. One of the main priorities is that more of any product is not always better in terms of business economics. More important is having just the right amount of each product at the right time and at the right price. Accomplishing this is perhaps where the most brain power and computer modeling technology (financial linear programs) in the oil business resides. When most of us think of a refinery and how it meshes with our energy infrastructure, we would expect that the more a given refinery could make of any product, the more profitable it would be. That is too simplistic for today's global integrated economies. Oil refining is a highly competitive commodity business that is highly sensitive to real and perceived changes in supply and demand. All major refining companies engage in commodities trading in futures for both crude supplies and product deliveries to hedge the effects of the volatile market.
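
To make the idea of these planning models concrete, here is a minimal sketch of a product-slate linear program. All numbers (margins, crude availability, demand caps) are hypothetical illustrations, not figures from the article; real refinery LPs have thousands of variables and constraints.

```python
# Minimal product-slate LP sketch: choose daily production of three products
# to maximize margin, limited by crude availability and what the market will
# absorb.  All numbers below are hypothetical.
from scipy.optimize import linprog

margins = [12.0, 9.0, 8.0]          # $/bbl margin: gasoline, diesel, jet fuel
c = [-m for m in margins]           # linprog minimizes, so negate to maximize

# Simplification: each barrel of product consumes one barrel of crude,
# and 100,000 b/d of crude is available.
A_ub = [[1.0, 1.0, 1.0]]
b_ub = [100_000]

# Market demand caps for each product (b/d).
bounds = [(0, 55_000), (0, 30_000), (0, 25_000)]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal slate (b/d):", result.x)
print("total daily margin ($):", -result.fun)
```

Because the demand caps sum to more than the available crude, the solver has to ration crude toward the highest-margin products, which is exactly the kind of trade-off the text describes.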

We have noted that high-level technology is used to manage the refining assets to just meet market demands. How does that translate down to the physical and chemical technology at work in the refinery? A more familiar analogy may help here. Consider a large integrated bakery that takes in flour as the main ingredient and must use as much of its machine capacity as possible to make each item, such as bread, donuts, cookies, and pizza dough, while somehow matching exactly what consumers want of each item. If they make too many cookies, they may have to sell some at a loss or even trash them. If they don't make enough bread, then they lose sales and the consumer may buy from a competitor. The engineers and managers running an oil refinery face similar challenges.

As we said before, the oil refinery removes impurities from the crude oil and restructures the molecular stew to meet standardized product specifications. The capacities for each of these processes were set when the refinery was planned, usually based on a particular crude oil composition and a particular expectation for product demand. Since U.S. refineries are about 30 years old, they are not seeing either the crudes they were designed for or the product demands and specifications that were expected. These changes are part of the reason that some refineries became unprofitable and were shut down. Consumers have demanded cleaner, more environmentally friendly low-sulfur fuels at the same time that refiners are driven to process less costly, higher-sulfur crudes just to survive. This bind has further contributed to refinery shutdowns, as the investment necessary to make low-sulfur products from increasingly higher-sulfur feeds is prohibitive for some companies. Wall Street investors continue to prefer the higher short-term returns of many other flashier high-tech investments over oil refining stocks.

Physical Separations
Desalting
The first process the crude oil sees in the refinery is called desalting. It is a physical separation that removes water-soluble salts, mainly sodium chloride, and suspended particulate contaminants. The crude oil is made into an emulsion with water, similar to mixing oil and vinegar. The salts dissolve in the water, and the mix is then allowed to separate by gravity; the denser water settles to the bottom. The sour water and sludge are drained off to waste water treatment. Any salt that remains in the crude can cause corrosion and critical loss of metal from process equipment, leading to expensive leaks and downtime. Desalting is important technology, but refineries can and do run without it, for a price.

Crude Distillation
The second major step is called atmospheric crude distillation, and it is a thermal physical separation of the various molecules by boiling point. The number of carbon atoms in a hydrocarbon correlates well with its boiling point. Distillation requires heating the crude to about 700°F before it enters the column. Fractionation by boiling point takes place, and typically five product streams are drawn off at different levels on the column. Any hydrocarbon boiling above ~650°F leaves the bottom of the column and is called atmospheric resid. The next stream above is diesel, then kerosene/jet fuel, then naphtha (straight run gasoline), and lastly an overhead stream containing mostly C3 propane and C4 butanes. The products are sent to storage tanks prior to further treatment. Crude distillation is basic technology, but it is very heat-energy intensive and thus accounts for part of the 19% internal energy usage in the refinery. Such high energy usage could create production limits if other demands for fuel take priority.
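
The sorting the column performs can be pictured as a simple mapping from boiling point to product cut. Only the ~650°F resid cut point comes from the description above; the other cut temperatures in the sketch below are typical textbook values assumed for illustration, since the article does not give them.

```python
# Rough sketch of atmospheric-column cuts by boiling point (degrees F).
# Only the ~650 F resid cut point is from the article; the rest are
# assumed, typical values for illustration.
def crude_cut(boiling_point_f: float) -> str:
    if boiling_point_f > 650:
        return "atmospheric resid"
    elif boiling_point_f > 450:
        return "diesel"
    elif boiling_point_f > 330:
        return "kerosene/jet fuel"
    elif boiling_point_f > 90:
        return "straight-run naphtha (gasoline)"
    else:
        return "overhead light ends (C3/C4)"

for bp in (30, 200, 400, 550, 900):
    print(f"{bp:4d} F -> {crude_cut(bp)}")
```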


Figure 2. Simplified refinery process flow diagram. From the Petroleum Refining Industry Study, www.eere.energy.gov/industry/petroleum_refining/pdfs/profile.pdf.

Chemical Conversion
The conversion steps begin after crude distillation because no product fraction yet matches a saleable product specification, and the demand for ~45% gasoline well exceeds the 10-20% yield of straight run, very low octane gasoline. The larger molecules in the heavier, high-boiling fractions contain most of the unwanted sulfur and other contaminants that need to be removed. The chemistry of hydrocarbon molecules shows that methane has the highest content of hydrogen, and as the number of carbon atoms increases the hydrogen content goes down. While methane has four hydrogen atoms per carbon atom, the largest multiple-ringed molecules may have less than one hydrogen per carbon. This means that making smaller molecules from large ones by thermal or catalytic cracking processes will create molecules that are deficient in hydrogen. These molecules contain one or more carbon-carbon double bonds (olefins). If three double bonds are linked alternately with single bonds in a hexagonal ring, we have benzene, the smallest of what are known as aromatic compounds. Benzene is subject to limits in gasoline due to its cancer-inducing toxicity and is the focus of technology development to minimize its formation.
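
The declining hydrogen content follows directly from the molecular formulas. For a straight-chain paraffin the hydrogen-to-carbon ratio is

\[
\frac{\mathrm{H}}{\mathrm{C}} = \frac{2n+2}{n} = 2 + \frac{2}{n},
\]

so methane (n = 1) has 4 hydrogens per carbon while long chains approach 2; an aromatic ring such as benzene, C6H6, is already down to 1, and fused multi-ring aromatics fall below 1.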

Hydrogen Addition
The refiner is faced with overcoming the inherent deficiency of hydrogen in the heavier molecules in the crude oil either by rejecting carbon or adding hydrogen. To add hydrogen requires a source of hydrogen and that is usually natural gas. Methane is reacted with steam and some oxygen to form carbon dioxide and hydrogen. This is a significant expense borne by all refiners. Hydrogen is added back to the deficient molecules under high pressures up to 3,000 psig and temperatures around 650°F over high technology catalysts (catalysts promote a chemical reaction without themselves being changed). The catalysts are actually designed to very specifically break carbon bonded to sulfur, nitrogen, nickel, and vanadium and then add hydrogen in their place. This process is called hydroprocessing and is the main way that sulfur is removed from refinery process streams. The nickel and vanadium are left behind as metal sulfides on the hydroprocessing catalyst pellets and ultimately fill the catalyst pores to the point the unit must be shut down and the catalyst replaced. Here catalyst technology has a role in determining how efficiently the sulfur and other contaminants are removed and for how long the hydroprocessing unit is on stream before catalyst change is needed. Catalysts may not even last a year when used for the heaviest highest sulfur resid streams that are seen more and more in refineries. Lower sulfur streams like naphthas can see a catalyst life of three to five years. Environmental mandates for lower and lower sulfur in gasoline, jet fuel, diesel, and fuel oils place difficult challenges on refiners that can only be met by investment in hydroprocessing capacity and the associated hydrogen generation capacity. Catalyst technology improvements can lower costs and increase conversion efficiencies. Again the issue of investment returns not matching other potential returns causes refiners to delay equipment upgrades as long as legally allowed.
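
The underlying chemistry of making that hydrogen from natural gas is the familiar reforming-and-shift sequence (shown here as the idealized steam-reforming stoichiometry; as noted above, plants also feed some oxygen):

\[
\mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2}, \qquad
\mathrm{CO + H_2O \rightarrow CO_2 + H_2},
\]

for an overall balance of \(\mathrm{CH_4 + 2\,H_2O \rightarrow CO_2 + 4\,H_2}\).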

We have given a brief description of how sulfur and other contaminants are replaced with hydrogen. Now let’s look at how carbon is rejected in other major process units in a typical refinery. There are two main types of carbon rejection process units in the refinery. The first is the fluid catalytic cracking unit and the second is the delayed coker unit. In both units, a shift of carbon and hydrogen arrangements occurs where large molecules often boiling above 950°F are broken or cracked and a low hydrogen coke residue is made along with a full boiling range of liquids, some C1-C4 gases, and hydrogen. Some sulfur and nitrogen atoms are cracked out of the molecules and are recovered as hydrogen sulfide and ammonia.

Catalytic Cracking
The fluid catalytic cracking unit (FCCU) in a large refinery is an awesome example of industrial technology prowess. It is physically the largest unit in most refineries, often reaching 150 feet or more in height, and in some cases the regenerator vessel may be 50 feet in diameter. Each unit is unique due to retrofits and custom upgrades. The performance of the FCCU, more than any other unit, determines the profitability of the refinery, although some refineries do not have one. The heavier one-third of the overall crude fed into a refinery goes through the FCCU (5-6 million b/d in the U.S.) and about 50% is converted to gasoline. The original units were built for World War II by Exxon. The most recent units built handle over 150,000 b/d! This means feed is entering at 4,400 gal./min. The "fluid" part of the unit name refers to the powdered catalyst, which circulates like a fluid at rates exceeding 100 tons/min. in this example. The catalyst is reused over and over by separating it from the cracked products and burning off the coke deposits before bringing it back to contact more feed. The FCCU typically rejects 3-10% of the feed carbon as coke, depending on the hydrogen content of the feed. The catalyst does break physically and is removed as fines. It also loses activity, requiring withdrawals and additions of fresh catalyst. Our example unit may consume 20-30 tons/day of fresh catalyst. The technology in the catalyst can be a limit on the conversion performance of a given FCCU. Catalyst is the single biggest operating expense of the FCCU and is watched closely by refinery management. Improvements in catalyst technology are limited by the willingness of refiners to pay for them. Current catalysts list in the range of $1,700-2,100/ton, so in our example the daily expense is around $50,000. The leverage it enables is much higher, though. Even at only $1/b, the gain is $150,000, or three to one. The real gain is more like $5-10/b with today's high crude prices. If you want to own one of these profit engines, be prepared to spend several hundred million dollars and wait two to five years to get it built and running. That is just for the FCCU; you would need the rest of the refinery to actually be in business, putting the total at an easy billion dollars or more. This is why few refineries are built today. Upgrading of older U.S. refineries is ongoing.
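
The feed rate and catalyst economics quoted above are easy to reproduce from the article's own figures; the sketch below simply restates that arithmetic (the 25 tons/day and $1,900/ton used are midpoints of the ranges given):

```python
# Back-of-the-envelope check of the FCCU figures quoted in the text.
GAL_PER_BBL = 42
MIN_PER_DAY = 24 * 60

feed_bpd = 150_000
feed_gpm = feed_bpd * GAL_PER_BBL / MIN_PER_DAY
print(f"feed rate: {feed_gpm:,.0f} gal/min")          # 4,375, i.e. the ~4,400 quoted

catalyst_tpd = 25              # midpoint of 20-30 tons/day
catalyst_usd_per_ton = 1_900   # midpoint of $1,700-2,100/ton
catalyst_cost = catalyst_tpd * catalyst_usd_per_ton
print(f"daily catalyst cost: ${catalyst_cost:,.0f}")  # ~$50,000

gain_per_bbl = 1.0             # conservative $1/bbl upgrade value
daily_gain = gain_per_bbl * feed_bpd
print(f"daily gain at $1/bbl: ${daily_gain:,.0f}  "
      f"(~{daily_gain / catalyst_cost:.0f}:1 vs. catalyst cost)")
```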

Coking
The coker is the other major carbon rejection unit, and it has a more modest role than the FCCU. No catalyst is used. The very heaviest, most hydrogen-deficient fractions of the crude are sent to the coker, where they are heated to 850-900°F and circulated through a large silo-like drum for 24-30 hours until the drum is full of coke; the feed is then switched to a waiting empty coke drum. The coke is steamed out to cool it and then cut out with water jets. The coke is eventually used as power plant fuel or made into anodes for aluminum metal reduction furnaces. Refineries will typically send 10-15% of the very heaviest crude fractions to the coker. The reaction yields about 65% liquids and gases and ~35% coke. Cokers are lower cost than FCCUs and can be added to existing refineries to enable processing of heavier, cheaper crudes. The coker is a basic low-tech process, yet it is very useful and increases the flexibility of the refinery to take advantage of lower-price spot market opportunities for heavy crudes.

The gasoline liquids from the FCCU and the coker still don’t add up to the demand for 45% gasoline product. Further, the straight run gasoline from the crude unit is of such low octane value as to be unusable in modern engines without upgrading it.

Reforming
A unit called the reformer is used to take the straight-run naphtha or gasoline from the crude unit, and sometimes coker naphtha, and rearrange the molecular structure. The reforming reaction uses a high-technology platinum-based catalyst to change low-octane straight-chain hydrocarbons (paraffins) of 5-12 carbons into branched chains (isoparaffins) and some aromatic ring compounds. This raises the octane dramatically, and the reformate is blended into the final gasoline product. Some byproduct hydrogen is also produced that is used in hydroprocessing. This conversion process is under scrutiny because it makes significant amounts of toxic, unwanted benzene. The octane upgrade is essential, but the benzene content is a technology limit. An interesting aside contrasts South Korea with the U.S. In the U.S., the gasoline we buy at any major brand-name retail station could be refined by any refining company and blended to the particular retail brand's specifications, including custom additives. This simplifies distribution logistics for the large geographic area of the U.S. South Korea is much smaller, and by law each retail brand must sell only gasoline made in its parent company's refinery. This means that they can actually compete at the pump over which company's gasoline has the lowest benzene. Koreans are environmentally conscious enough to make benzene content a product bragging-rights issue, and the company with the lowest benzene could command a slight price advantage or take a larger market share.

Alkylation
We are still short of the total gasoline needed and there is one more unit that makes an important contribution to the gasoline pool. It is the alkylation unit. Here propylene C3= and butylenes C4= are fused over an acid catalyst with a branched C4 isobutane into highly branched C6, C7, and C8 molecules with octanes in the 90-100 range. Alkylation technology makes some of the most desirable molecules one could want to meet all of the properties of EPA mandated reformulated gasoline. However, the most efficient alkylation process uses highly toxic hydrofluoric acid (HF) and this scares people. Many refineries have been required to convert to the less desirable sulfuric acid catalyzed process. Alkylation is another limiting technology, especially with the phase out of the oxygenate MTBE. Alkylate is the logical replacement, but again investment lags in increasing alkylation capacity. Also, the refinery can only make as much alkylate as they have isobutane, propylenes, and butylenes to feed to it. Almost all of the propylene and butylenes in the refinery are made in the FCCU. Additives for the FCCU catalyst are used that can convert some of the FCCU gasoline into C3, C4 olefins to feed the alkylation unit and this gives the refinery some flexibility in meeting final gasoline compositions. Catalyst technology does limit the ability of current alkylation processes to provide desired molecules for blending in the final gasoline.
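
A representative alkylation reaction (one of several possible pairings of the olefins and isobutane mentioned above) is the acid-catalyzed combination of isobutane with a butylene to give isooctane:

\[
\mathrm{i\text{-}C_4H_{10} + C_4H_8 \;\xrightarrow{\;HF\ \text{or}\ H_2SO_4\;}\; C_8H_{18}}\ \text{(e.g. 2,2,4-trimethylpentane)},
\]

a highly branched paraffin with an octane rating near 100.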

Hydrocracking
There is one other major refinery unit that is important for making high quality distillates used in jet fuel and diesel products and that is the hydrocracking unit. This unit combines the high pressure hydrogenation environment of hydroprocessing with some cracking activity in the catalyst. Some of the most important work in petroleum catalysis was done in the hydrocracking area by major oil companies and catalyst companies. Licensed proprietary products were used for many years. This is an area of significant technology content and some work continues. If the demand for low-sulfur jet fuel and diesel increases faster than for gasoline, it could begin to limit supply unless investment in additional hydrocracking capacity is made.

Petroleum Liquid Fuels from Other Carbon Sources

Large amounts of fossil carbon exist in forms other than crude oil, and some of these resources are being exploited already. The most significant example is the extraction and conversion of bitumen from oil sand into syncrude in Alberta, Canada. This operation supplies about 13% of Canada's liquid fuels and some is exported to the U.S. The reserves are 2-3 trillion barrels of bitumen, equal to about 15 years of total current world consumption. It is a technology-limited process because huge amounts of oil sand have to be surface mined and extracted to recover 90% of the bitumen. Large amounts of water and natural gas are also needed to run the process. Current annual output is only equal to a few days of U.S. demand. Large tracts of land are disturbed and the sensitive habitats have to be restored. Until a more efficient extraction technology is developed, oil sand syncrude is not going to rival crude oil any time soon.

There are similar large amounts of heavy, nearly solid oil in Venezuela awaiting efficient extraction and conversion technology to really supplement crude oil. We will probably utilize the more liquid heavy crudes found in Colombia, Saudi Arabia, and other locations before the Venezuelan tar is used in any quantity.

Shale oil is another carbon resource (found in large quantities in the Western U.S.) that awaits an efficient technology to make its extraction attractive. It also requires large amounts of process water and disruption of surface habitat.

Gas to Liquids (GTL)
This is a technology-limited process that can convert many different carbon sources into clean, low-sulfur products. We can look to the Exxon website and find that they are the leading major oil company in this technology. There are several competing catalytic processes to make liquids from gasified carbon sources. They are all based on the German Fischer-Tropsch hydrocarbon synthesis, known for almost a century. In this reaction, the carbon source is partially oxidized in air or pure oxygen to give mostly carbon monoxide, which can be reacted with steam to make hydrogen and carbon dioxide. The hydrogen is then used to react with more carbon monoxide, and linear hydrocarbon chains can be synthesized up to solid waxes. The Sasol Corporation in South Africa has been making liquid fuels from coal by this process for decades due to the embargo on petroleum imports during the apartheid era.
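
The core Fischer-Tropsch stoichiometry referred to above can be summarized as syngas generation by partial oxidation, the water-gas shift to tune the hydrogen supply, and chain-building synthesis:

\[
\mathrm{CH_4 + \tfrac{1}{2}\,O_2 \rightarrow CO + 2\,H_2}, \qquad
\mathrm{CO + H_2O \rightarrow CO_2 + H_2}, \qquad
\mathrm{n\,CO + (2n{+}1)\,H_2 \rightarrow C_nH_{2n+2} + n\,H_2O}.
\]

The partial-oxidation line assumes a methane feed; coal or bitumen feeds are gasified with oxygen and steam instead, but the downstream shift and synthesis steps are the same.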

Exxon is planning to utilize natural gas that is either flared or not produced to make liquids. There is a lot of gas that occurs too far away from pipelines to use it, so it is left in the ground. There is also a lot of natural gas produced with some crude oils that has no ready access to market and is either flared or the wells are left shut in. Exxon is beginning a large project in Qatar in the Persian Gulf to convert large amounts of natural gas available there into clean distillate liquids potentially usable to help meet low-sulfur diesel targets by blending with crude oil-based diesel. These plants are huge, requiring several billion dollars to make a single 100,000 b/d plant. There are offshore areas where natural gas is available, but they are too remote to access a pipeline and liquefied natural gas plants are not yet planned for remote gas. Exxon and others do contemplate building GTL plants for some of the off shore remote natural gas. The investment required is very large and putting such a plant on a floating platform is unprecedented.

Then there is the enormous amount of methane believed to be held in methane hydrates at the bottom of the ocean over large areas and in some polar permafrost soils. Can this be produced economically? That is a speculative issue now, as no proven technology exists to safely bring the methane up from the bottom without leaking it to the atmosphere, where it is a potent greenhouse gas. Some research programs are underway, as the payoff is potentially very rich. The danger is that the unstable methane hydrates could be disturbed by trying to extract the methane from the ocean bottom, triggering a chain reaction of massive methane release and a greenhouse global warming event. There is also concern about disrupting the sensitive ocean bottom ecosystem on a grand scale. A lot of research and development has to be done fast if methane hydrates are going to be part of our fuel mix any time soon. Since most of the methane hydrates are well offshore, they would need a platform GTL plant to make production viable. Here is another investment choice.

Conclusions

We have touched on a lot of technology, from petroleum crude oil exploration and production through its refining and the conversion of alternative carbon sources to petroleum liquids. A single theme recurs: there are real technology limits at key points in these processes, but they are always dependent on investment. So even if the fundamental natural supply of crude oil is unlimited, that fact alone does not mean we will have enough petroleum products to meet increasing demand. Diligent scientists and engineers can likely invent innovative technology to overcome many if not all existing limits. However, as long as our collective demands for investment returns are too high and too short term to properly underwrite the large, long-cycle-time investments in our petroleum energy infrastructure, we risk precipitating disruptions and shortages in petroleum product supply. We should keep in mind also that the demand side of technology can change the outcome quite effectively. The whole SUV phenomenon is mostly a marketing coup with very little objective necessity; SUVs and other low-gas-mileage consumer vehicles have artificially increased demand for gasoline in the U.S. We don't have to consume energy at anywhere near the rates that we do. We need to apply return-on-investment metrics to our personal energy consumption to see what we are getting for our expenditures. We also need to engage the political process to find a way to raise the incentive to invest in long-term, lower-return financial instruments. The intended result is to ensure that sufficient capital is available to support improvements in refining technology and in conversion of alternative carbon sources to liquid petroleum products. The technology-versus-investment allocation issue seems like a "chicken or the egg" question, and resolving it appeals to our basic strength of will and courage to take the necessary risks and strive to build a healthy, rewarding future.


John Rudesill

Originally Published in Infinite Energy Magazine Issue 60, March/April 2005


