Monday, June 17, 2024

A Republic, If You Can Keep It

As Benjamin Franklin exited the Constitutional Convention at the Pennsylvania State House (now known as Independence Hall), Elizabeth Willing Powel asked him, "Well, Doctor, what have we got, a republic or a monarchy?" Franklin replied: "A republic, if you can keep it."

The United States federal government is an intentionally complex organization. The only small-d democratic institution in the U.S. federal government is the House of Representatives. In the early years of the country, some states had all Representatives elected at-large, rather than from districts.

Originally, Senators were elected by state legislatures. The famous Lincoln-Douglas debates, in which Abraham Lincoln challenged incumbent Illinois senator Stephen Douglas, demonstrate this. Lincoln had won his party's nomination to be its Senate candidate, but the purpose of the debates was to encourage the citizens of Illinois to vote for the state legislative candidates of Lincoln's or Douglas's respective parties. It was not until the 17th Amendment was ratified in 1913 that Senators were directly elected.

Then, obviously, the President is indirectly elected via the ephemeral Electoral College. Indirect election of the head of government and head of state is actually the norm in most of the world. In parliamentary systems, the only people who directly elect the person who ultimately becomes the Prime Minister are those in that member's district. The head of state in a monarchy is not elected, and Viceroys or Governors General are appointed by the monarch, usually after nomination by the Prime Minister. In nations with both a President and a Prime Minister, the President is often indirectly elected, as in Germany.

In many parliamentary systems, the upper house is not democratically elected, but is appointed. However, in most parliamentary systems, the power of the upper house is severely limited. The upper houses of federal systems of Germany and Australia are similar to the U.S. Senate in structure, with some differences. France is a rare exception in that the President is directly elected, but he or she shares power with the Prime Minister, and the cabinet positions come from the parliament.

Obviously, the Supreme Court of the U.S. is not democratically elected. The three branches of the U.S. federal government, the legislative, the executive, and the judiciary, serve as checks and balances on each other. There is no small-d democratic ability for the people to recall a president, senator, or representative, the way some states allow the recall of a governor.

The Constitution's original design of having state legislatures elect Senators addressed the Anti-Federalists' desire for the state governments to have influence. The 17th Amendment came about because of corruption: Senate candidates often were people who could get nominated and elected through graft, similar to how ambassadorships are awarded today. While that was a real problem, the solution, direct election of Senators, has unfortunately turned the Senate into a sort of "Super Legislature" instead of a distinguished upper body.

My point is simple. The people who wail, gnash their teeth, and rend their garments over "Democracy!", and claim other countries are "more democratic", are little more than useful idiots with little understanding of how complex organizations work or how the rest of the world works. There are a lot of things we could do to make our system "more democratic" without significant changes to our basic structure. The U.S. House of Representatives has been stuck at 435 members since 1929, while the U.S. population is now 2.75 times as large as it was then. Expanding the House of Representatives would make the U.S. "more democratic."

Maine and Nebraska apportion their Electoral College votes with one EC vote going to the presidential candidate who wins each congressional district and two EC votes going to the presidential candidate who wins statewide. If such a system were implemented across all of the states, large states like California, Texas, Florida, and New York would have significant EC votes in play for both parties, meaning California would matter for Republicans, and Texas would matter for Democrats. Instead of "Swing States" we would have "Swing Districts" where only one EC vote was in play, and "Swing States" where just two EC votes were in play. Presidential elections would suddenly become much more small-d democratic. Combine a much larger House of Representatives with a nationwide Maine-Nebraska EC model and presidential election dynamics change dramatically.
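
The Maine-Nebraska allocation rule is simple enough to sketch in a few lines of Python (the function and the hypothetical vote data below are mine, purely for illustration):

def allocate_ec_votes(district_winners, statewide_winner):
    """Maine-Nebraska model: one EC vote per congressional district
    winner, plus two EC votes to the statewide winner."""
    votes = {}
    for winner in district_winners:                # one vote per district
        votes[winner] = votes.get(winner, 0) + 1
    votes[statewide_winner] = votes.get(statewide_winner, 0) + 2
    return votes

# A hypothetical five-district state that splits 3-2 and goes R statewide:
print(allocate_ec_votes(["R", "R", "R", "D", "D"], "R"))   # {'R': 5, 'D': 2}

Under the winner-take-all rule used by the other 48 states, the same 3-2 state would go 7-0.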

There are other changes. In my opinion, the Senate has too much legislative power, and as a result is too focused on legislation and not on its advice and consent role. The entire Congress delegates too much power to the Executive branch. In hindsight, lifetime appointments of judges may not be the best approach. Vox proposed 18-year terms, under which two Supreme Court seats would expire during each four-year presidential term. Ending the judicial filibuster was a bad idea, but at the same time, the extreme partisanship around judicial appointees is worse. Senators terrified of being primaried because they voted to approve a presidential appointee are one reason direct election of Senators has created new problems. Of course, if a president had to nominate a prospective justice who could get 60 votes, there should be fewer ideological challenges. But never forget, it was the Democrats who pilloried African-American Clarence Thomas, and filibustered Latino Miguel Estrada and African-American Janice Rogers Brown.

Ultimately, the answer for better, "more democratic" government is to reduce the centralized power and influence of the federal government in as many ways as possible. Federalism, subsidiarity, and localism are what makes democracy work. Putting more "democracy" in the central federal government will not make life better for the citizens. Potholes, traffic issues, crime, education, etc., these are local issues. Democracy starts at the city council member, not at the Senator or the President.

Saturday, April 20, 2024

We Must Accelerate (e/acc)

[NOTE: 20 April, 2024. This started as an initial draft. I have made multiple updates to this post and will continue to.]

The middle-class in the U.S. is at risk. We often see information that shows the U.S. middle-class has greater wealth, including larger homes and more disposable income, than the middle-class in other western countries. But that is at risk of changing.

This raises the question: "What is the middle-class?" There are a number of definitions from a number of organizations. The Urban Institute defines it as 150% to 500% of the federal poverty level. Of the three definitions discussed here, this one starts at the lowest point and also tops out at the lowest; it is fair to say it includes the "lower middle-class" and excludes the "upper middle-class." The Brookings Institution used the 20th to 80th percentiles of income. This starts at a point similar to the Urban Institute but goes much higher, and therefore includes what one might consider the "upper middle-class." Pew Research Center uses 67% to 200% of the national median income. This starts much higher than the Urban Institute or Brookings, but goes up to a similar point as Brookings, so it excludes the "lower middle-class" but includes the "upper middle-class." From these three definitions, we can roughly define a lower middle-class, a "middle middle-class", and an upper middle-class.

The first challenge for the middle-class is that home ownership has become affordable only to the upper middle class and the wealthy.

The most recent data as of April 2024 shows the median monthly payment for a home purchase rose to $2,775, which requires an annual household income of about $120,000, roughly 1.5 times the median U.S. household income.

For perspective, the median monthly mortgage payment in 2021 was $1,001, according to Census housing data. That figure covers all existing mortgages, not just recent home purchases.

In 2021, the median monthly payment for a home purchase made that year was about $1,785, which required a household income of $76,500. How many people are making 57% more in 2024 than in 2021? Housing prices really started shooting up in 2020. If you go back to 2016, the monthly payment for a home purchase that year would have been about $1,428 (requiring $61,200 annual income). And in late 2011, not long after the historically low home prices coming out of the 2008-2009 collapse, the monthly payment for a home purchase that year was about $1,140 (requiring $48,857 annual income).
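
To see how a monthly payment maps to a required income, here is a minimal sketch using the standard fixed-rate amortization formula. The 28% housing-to-gross-income ratio is my assumption (a common lender front-end ratio), not a figure from the data above:

def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate mortgage amortization formula."""
    r = annual_rate / 12                  # monthly interest rate
    n = years * 12                        # total number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

def required_income(payment, housing_ratio=0.28):
    """Gross annual income needed if housing consumes housing_ratio
    of gross income (28% is an assumed lender front-end ratio)."""
    return payment * 12 / housing_ratio

print(round(required_income(2775)))       # ~118,929, i.e. roughly $120,000

At a 28% ratio, the $2,775 payment backs out to about $119,000 of gross income, consistent with the roughly $120,000 figure above.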

The dramatic spike in house prices, caused by the pandemic construction halt and the resulting drop in inventory, combined with high mortgage interest rates driven by the current inflation, has created a "double whammy": a doubling of housing costs over a decade, while the CPI has gone up 32% and wages have increased by 44%.

Here is something else to think about. If you follow Dave Ramsey's guidance, you should only buy a home on a 15-year fixed rate mortgage and not pay more than 25% of your take-home pay on total housing costs (mortgage principal and interest, home insurance, and property taxes). And make a 20% down payment. To be fair, when Ramsey talks about take home pay, he means gross pay less taxes and normal deductions like health insurance, but not 401k deductions.

What this comes to on average today, based on today's mortgage rates, is that you should not buy a house priced at more than twice your gross household income. That means we could not afford the house we currently live in unless we made a 25% down payment, which would be $150,000. That is reasonable if one is selling a current home with significant equity, but not realistic for a first-time home purchase.
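
Here is a minimal sketch of that Ramsey-style math, with assumed inputs for the mortgage rate, the take-home share of gross pay, and monthly taxes and insurance (all three are my assumptions, not Ramsey's published figures):

def max_house_price(gross_income, rate=0.065, take_home_share=0.75,
                    tax_ins_monthly=450, down_payment=0.20):
    """Largest house price where a 15-year mortgage plus taxes and
    insurance stays under 25% of take-home pay. rate, take_home_share,
    and tax_ins_monthly are assumptions for illustration."""
    budget = gross_income / 12 * take_home_share * 0.25    # 25% of take-home
    pi_budget = budget - tax_ins_monthly                   # principal + interest
    r, n = rate / 12, 15 * 12
    principal = pi_budget * ((1 + r) ** n - 1) / (r * (1 + r) ** n)
    return principal / (1 - down_payment)                  # add the down payment back

print(round(max_house_price(100_000)))    # about 160,000 under these assumptions

Under these assumptions a $100,000 household tops out around $160,000, comfortably under the twice-gross-income ceiling; higher rates or higher taxes and insurance push the ceiling down further.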

The area we live has gone from what was once probably a 75th percentile income region 20 years ago, to an 80th percentile income region 10 years ago, to a 90th percentile income region today.

To put that into perspective, you must earn 15% more to move from the 75th to the 80th income percentile, and then another 40% more to move from the 80th to 90th income percentile.

Unless something changes, in a generation, instead of three elementary schools within two miles driving distance of our house, there will be at most one, because families with young children will be priced out of the region. Instead of three middle schools and three high schools in the city there will be two, or perhaps just one. Prior to 1991, there were three elementary schools and no middle school or high school in this community; students were bussed to middle and high schools to the west in the same county. In 1991 a middle school and a high school were built in the community. Rapid growth saw five elementary schools added between 1994 and 2000, another middle school in 2001, another high school in 2002, another middle school in 2003, another elementary school in 2004, and another high school in 2008. From the way this growth was staggered, it is clear there were a lot of younger children in the community thirty years ago. The last school built in the community was the 2008 high school. By that point the community had become solidly upper middle class, and many of the people moving in were middle-aged with high school aged children.

That said, the local public school system is well attended by children of high-earning residents living in very expensive homes. The middle school built in 2003 sits directly across the street from a very high-end country club; directly across from the entrance of the school is the gated entrance and security guard building for the club. If you happen to drive by as school lets out, you will see a steady stream of middle schoolers diligently walking across the crosswalk into the country club neighborhood. Yes, a 50 year-old corporate executive might have a youngest child who is a 6th grader; people are getting married later and having children later. But the other reality is the homes in this country club were in the $750 thousand to $1.5 million range a decade ago, and now you probably will not find a home in the neighborhood for less than $1.5 million. A decade ago, a 40 year-old sales executive could afford a one million dollar country club home, especially given interest rates at the time; not the smallest home in the country club, but a mid-tier home. The same sales executive today might still be able to afford a one million dollar home despite much higher interest rates, because they earn more, but one million dollar homes can no longer be found in the same country club, so they will need to step down to a less exclusive one.

I remember a conversation relayed to me about my company's current worldwide executive vice president of sales. About a decade ago, he bought into a large, expensive country club about ten miles north of where I live. This country club was famous because a former professional baseball player lived there. When someone commented that the then lower-level sales VP must be doing well to be buying a home on that country club, he replied: "I have the least expensive home in the country club." Someone in the same role today that he was in a decade ago probably could not buy into that country club at all, even its least expensive home.

The same dynamic hits in lower income ranges and less expensive neighborhoods. It just keeps rolling down. The parents with young children will move further out into the exurbs, and suffer longer commutes, etc. Formerly solidly middle-class neighborhoods in the suburbs become exclusively upper middle-class.

A freezing in-place effect also occurs. The empty nester couple cannot afford to downsize because the higher interest rates make the less expensive condo's mortgage payment higher than the more expensive house they bought a decade ago.

Eventually inflation will come down, and interest rates will come down. As more housing is built, albeit in the exurbs, there will be downward pressure on housing costs. There might even be another housing price collapse like the one in 2008. All of this may correct some of the demographic changes.

However, the demographics of the area were already changing before the latest housing price increases and high interest rates. Registered students have been declining each year over the last several years at most schools. The decline is most noticeable at the elementary school level, but it is happening across the board. Given current housing prices, it is likely there will be a demographic collapse of elementary-aged children in the community over the next decade. This could be a temporary trough, or permanent.

The second reality is that having more than two children with the intention of sending them all to college is increasingly an upper-middle class or even upper-class idea.

Here is some back-of-the-envelope math to consider:

A four-year degree at in-state rates at a public university will cost about $100,000.

A four-year degree at out-of-state rates at a public university will cost about $180,000.

A four-year degree at a private university will cost between $250,000 and $365,000. Some private universities' costs are on par with in-state public universities, while elite universities are north of $350,000 for four years.

The cost of a university education inflates at about 8% per year, compared to historic inflation of less than 3% per year.

From birth to college entry is about 18 years, and college itself lasts four more. So, you get to save for a child's college over a period of 22 years, with money starting to be withdrawn four years prior to final maturity. Additionally, saving for college is like saving for retirement: you can invest more aggressively early, but need to invest more conservatively as the college years approach.

Over the last 20 years, the S&P 500 average rate of return has been 9.8% (with dividends reinvested).

The reality is, you would need roughly $100,000 invested at the birth of a child to have enough to pay for college when they enroll, assuming a normal investment strategy. Put another way, you will need to save about $8,000 per child per year in a 529 to completely pay for even an in-state public university education, assuming no other assistance or source of funding (such as your child working during college).
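
A rough sketch of that 529 math, using the 8% college inflation and 9.8% S&P figures from above, and ignoring, for simplicity, both the four-year withdrawal period and the conservative glide path:

def annual_529_contribution(cost_today=100_000, college_infl=0.08,
                            years=18, annual_return=0.098):
    """Level annual contribution whose future value covers the
    inflated four-year cost at college entry. Simplified: assumes
    a constant return and a lump-sum need at year 18."""
    future_cost = cost_today * (1 + college_infl) ** years
    # future value factor of an ordinary annuity of 1 per year
    fv_factor = ((1 + annual_return) ** years - 1) / annual_return
    return future_cost / fv_factor

print(round(annual_529_contribution()))   # roughly 9,000 per year

At a constant 9.8% this lands near $9,000 per child per year, in the same ballpark as the $8,000 figure above; a realistic glide path into conservative assets near the college years would push it higher still.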

Here is something else to consider. Going to an out-of-state public university today costs more in real dollars than it cost to go to Harvard in 1985. As I note above, it will cost about $180,000 to send a kid to an out-of-state public university starting this fall. If Harvard's cost had grown only in step with CPI since 1985, it would cost about $170,000 to go to Harvard. This means the economic burden of sending a kid to a public university in a neighboring state in 2024 is higher than the economic burden of sending a kid to Harvard in 1985. How many early GenXers know someone who went to Harvard in 1985? What kind of family did they come from? Probably very upper-middle class or, more accurately, wealthy. The middle class are simply being priced out of college for their kids.
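
The arithmetic behind that comparison, with the 1985 Harvard figure as an approximation (roughly $14,500 per year all-in, about $58,000 for four years) and a CPI ratio of roughly 2.9 between 1985 and 2024, both my approximations:

harvard_1985_four_year = 58_000      # approximate 1985 all-in four-year cost
cpi_ratio_1985_to_2024 = 2.9         # approximate CPI-U growth, 1985 to 2024
print(round(harvard_1985_four_year * cpi_ratio_1985_to_2024))   # ~168,000

That is the roughly $170,000 figure above, against $180,000 for today's out-of-state public university.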

The current birth rate in the U.S. has collapsed to 1.62 births per woman, and it falls further as income (and the likelihood of college attendance) rises, so demographics is doing its part, but it portends a significant drop in potential college students in twenty years.

The idea of a commodity (education is simply information and the transmission of it) inflating at 4-5X the rate of CPI should be untenable in a modern society. It is unsustainable. Are people really going to pay $180,000 for an in-state public university bachelor's degree for the high school class of 2030? What about $325,000 for a public university in another state? Until we address this, we will face a higher education crisis of one form (unaffordability) or another (the shuttering of half of our universities). It is already starting, with thirty colleges and universities shutting down in 2023.

Is there an answer? Yes. What is the answer? We must accelerate the adoption of the most deflationary tool we have access to: technology. Technology is inherently deflationary. Second, we must accelerate the adoption of the most productivity-increasing tool we have access to: technology. Technology is inherently productivity enhancing. But there is a tremendous risk, and it lies with those who would "Pause AI." The simple fact is, in 2024, pausing AI is pausing technology. AI is a feature, not an industry. No new technology tool will be implemented without AI features. Those who would "regulate math" should be viewed with great skepticism, and doggedly investigated using a "follow the money" methodology. Many who propose to regulate AI are simply developers of proprietary AI who wish to use regulatory capture to protect their markets.

Technology holds the promise of finally stopping the maddening inflation in the areas where prices rise at rates significantly above the Consumer Price Index (CPI). For example, health care and higher education have inflated at rates far above the CPI. One need only look at textbook prices; we have a control group, other books, to compare against. What is worse, most textbooks contain material that is in the public domain. Certainly new textbooks are needed for certain subjects. Computer science programming classes went from COBOL to C to Java and now Rust. But the content of English literature, history, introductory chemistry, biology, and physics texts does not change rapidly.

Education, at its core, is the transmission of existing information. Every other industry in the business of transmitting existing information has seen a collapse in costs. Witness the change in transmitting a document over the last 50 years: we went from postal mail, to overnight express packages, to the fax machine, to transmitting digital documents via email. Accelerating the transmission of information was also a huge productivity boost. And it was this measurable productivity boost in the 1990s, when PCs, local area networks, wide area networks, and the internet created an economic boom, that made the sleepy, wooded, one-time exurb with some scattered homes and a couple of new country clubs into the vibrant, middle-class suburb I now live in. My lifetime has witnessed this wealth explosion, in which middle-class neighborhoods went from being defined by 1,400 square foot three-bedroom ranch homes and 1,800 square foot four-bedroom split-levels to looking like the upper-middle-class neighborhoods of the 1970s.

Now we see the reverse effect. And we need to stop it. There are late Millennials and Gen Z who now have children of their own, and like the cohorts who came before them, want to live in a single-family home in a good, safe neighborhood in a good school district, without an hour-plus commute.

At the same time, while the trendy thing might be to follow the "one and done" philosophy and only have one child, or "two and through" and have two children, I know a lot of older Millennial families who have three and four children. When I run the numbers on college for my two kids, it deeply concerns me for them. The higher education establishment needs neutron bombs exploded over every campus. It needs a radical rethink. To be fair, for the elite universities, neutron bombs will not be enough. Break out the Castle Bravo bombs.

For many decades, the median universities imitated the elite universities. The "Harvard Case Method" became the norm for MBA programs around the world. Berkeley's David Patterson's "Computer Architecture" textbook was the standard for computer engineering taught in every engineering school. Education is simply the transmission of information. There is the information, the transmitter (the professor), the medium (the classroom at the university), and the receiver (the student). The only difference between MIT and State Tech is the quality of the human components. MIT gets the top one tenth of one percent of the high school students looking to study engineering, and has the top professors in the world. But it's the same laws of Newton.

I first noticed the potential of technology to accelerate education twenty years ago, when I read an article about the U.S. Army's Command and General Staff College's distance learning program. The Command and General Staff College is a master's level program for Army Majors and other O-4 grade military officers. A small percentage of top performers attend in residence; the rest attend via distance learning while continuing their full-time jobs. In the late 1990s and early 2000s, some innovative faculty members in the distance learning program realized the Internet and Internet streaming provided a means to host and deliver live or recently recorded content. They were able to get top guest lecturers such as Colin Powell and Newt Gingrich. The in-residence school was incensed, because it could not get guest lecturers of this caliber, who would have had to fly in to Fort Leavenworth, Kansas to speak.

The next realization was around 2017 or 2018. I attend a non-denominational, multi-campus "megachurch", and I have been attending long enough to have seen its multi-campus video system evolve. The church is unique in that it purchased a used high-resolution video camera from NASA and used it to film the pastor speaking on stage and simulcast the image to another auditorium in the same building, where it was projected onto a 16 by 28 foot screen at the center of the stage at a one-to-one scale. The image on the center screen was life-sized, with high enough resolution to fool people into thinking the pastor was actually on the stage. This center camera was fixed. The 11 by 19 foot side screens in the main service, used for close-ups, videos, etc., were also simulcast to similar screens in the second auditorium, but these used standard television cameras. When the church expanded to a second campus, the remote campus presented the same sermon series on a one-week delay, leveraging recordings of the main campus service. The recordings of the two video streams (main center screen and side screens) were time-synchronized and required hard drives for recording. The second campus had an identical projector and screen setup. By the time the third campus was added, WAN links had become powerful enough that live transmission of the side screens was possible. A few years later, live network transmission of both the side and center screens became possible for both remote campuses.

Then the Georgia Board of Regents, which oversees Georgia's public higher education system, consolidated many of the state's universities and junior colleges. Adjacent to my church's main campus was a remote campus of Georgia State University (a four-year institution) and a two-year junior college. The two institutions were merged, and the two adjacent buildings were combined into a common campus. Driving past them every Sunday, I thought: there is no reason they cannot use the same technology our church does to "beam" a class from the downtown campus into the local suburban campus, with a graduate assistant in the local classroom to handle roll, test proctoring, and so on.

I thought back to my time as an undergraduate and graduate student in 1989 and 1990, when my university started a distance learning program using VHS video tapes. A handful of classrooms were remodeled: they were soundproofed, and cameras were added. One camera was in the ceiling, zoomed into a drawing pad which replaced the transparency projector; its image was projected via television onto a screen in the room. The VHS tapes featured a split-screen of the professor and the overhead camera. Two feeds, just like our church. The technology is there. You do not need the life-sized center screen; you just need one camera on the lecturer and one video feed of the whiteboard or PowerPoint presentation. Simple web chat would allow students in remote classrooms to ask questions of the professor. High quality digital video cameras today are very inexpensive, Internet bandwidth is cheap, and SD-WAN over the Internet means dedicated WAN links are no longer required.

Education was ripe for a rethinking well before COVID. We have Khan Academy and other concepts. When the pandemic hit, and people could not go to testing centers for IT certifications, individual human proctors watched testing students via the student's laptop cameras to make sure they were not cheating. Today AI could easily do this.

The reality is, the Berkeleys and the MITs and the Harvards and the Georgia Techs could easily triple or quadruple their enrollment overnight if they wanted to. We could cut the number of professors needed in half tomorrow. We do not even need grad students as teaching assistants and proctors; upper-division undergraduates could do it for freshmen and sophomores, and master's students for upperclassmen. We could have done this a decade ago.

Then there is the insane expansion of administrative positions in higher education. While every other industry has seen a collapse in overhead positions (just try to find a typing pool in the headquarters of a Fortune 500 company, or a secretary supporting a mid-level manager), higher education has seen an explosion of administrative overhead. In some cases, administrators outnumber academics at universities by three to one. AI could easily replace many administrative functions in higher education.

The ultimate goal should be to cut the cost of a bachelor's degree in half in ten years. That is a very reasonable goal. But it will probably also require a complete rethinking of the century-old university accrediting system. Accreditation is a form of regulation, and regulation is always an impediment to innovation. An obvious approach would be to accredit individual courses and curricula instead of entire universities. I recall when my university was threatened with losing its accreditation due to violations in the athletic department (which should be the purview of the NCAA, not the accrediting association), and a loss of confidence in the then university president by some of the faculty (the university president being an administrative role, not an academic one). This terrified the students, who were told that if accreditation was lost, their degrees would be permanently worthless. What, pray tell, does the football team, or a university president getting on the wrong side of some professors, have to do with Aerospace Engineering?

Given the speed at which technology changes, and the requirement for lifelong learning, less formal education, more apprenticeship, and more on-the-job training make much more sense in the 21st century. One can argue the western university model is a millennium old. At a minimum, using the original Morrill Land-Grant Act, passed in 1862, as the start of the modern U.S. university system, our university model is over a century and a half old, roughly in line with the Second Industrial Revolution.

The time is ripe for a new experiment in post-secondary education. And that will require slaughtering some sacred cows.

AI and other technology will not build more housing, yet. But it can optimize traffic patterns, making more distant suburbs more valuable. We already go to the store less. I once needed an Ethernet cable and got in my car to drive 25 minutes to Micro Center to buy one. Then, sitting at a traffic light, I ordered one on Amazon from my phone, and instead of turning left, I made a U-turn and went home.

What AI and other technology can do is make the upper middle class more productive and raise them to the upper class. What AI and other technology can do is make the middle class more productive and raise them to the upper middle class. What AI and other technology can do is make the lower middle class more productive and raise them to the middle class. And it can take the working class and raise them to the lower-middle class. And it can do this while simultaneously lowering the costs of health care and higher education. It is a win-win-win-win-win-win situation. We are not taking jobs from anyone. With a 1.62 fertility rate, everyone in the future is going to have a job. We might create so many jobs the federal government starts paying middle class people to have more kids. Imagine if the federal government offered free higher education to any second and third child born to a couple.

Acceleration of technology will simultaneously increase productivity and drive down costs. Accelerating technology is deflationary. Accelerating technology boosts productivity. This has been true since the discovery of fire and the invention of the wheel.

We must accelerate.

Effective accelerationism is the only philosophy that holds a reasonable promise of returning the U.S. to an economic curve where the middle class thrives.

Effective accelerationism is not something just for the Silicon Valley tech lords or the venture capitalists. It is not just for software and computers. It is also for manufacturing and reshoring. It is for nursing and elder care. It is for teachers and pediatricians.

We must accelerate, or the middle class will die. Home ownership will be reserved for the wealthy, the vast majority will rent their homes from BlackRock, and the federal government will nationalize the failed public university system and dictate who gets to attend. If you think university admission is political now, just wait until you have to contribute to your local U.S. representative's re-election campaign to buy your child a chance to attend a middling state technical university.


Thursday, April 11, 2024

More on our "Rare Earth"

I recently saw a social media post that said the recent total solar eclipse is proof of a "fine-tuned universe." That led to someone saying they did not believe in a fine-tuned universe, which led me to realize the phrase "fine-tuned universe" is a turn-off to atheists and agnostics, because it assumes a "tuner."

It is important to note who has used the phrase "fine-tuned":

"The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. ... The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life." - Stephen Hawkins, "A Brief History of Time"

Hawking was agnostic at best, and more likely would be described as an atheist, yet he used the phrase "finely adjusted." What has changed is that atheists have become more thin-skinned and defensive, in large part due to the rise of the defiant "New Atheists."

Regardless, it makes more sense to use the phrase "Rare Universe" instead of "Fine-Tuned Universe", because our universe does appear to have characteristics that make its formation rare. Of course, the idea of a fine-tuned or rare universe has to be attacked by those who see rarity as a threat. Personally, I believe there is no such thing as an atheist; there are only those who believe in different gods. Many atheist cosmologists believe in a god of probability, to whom rarity is the antichrist. Hence, they torture math to find ways to reduce rarity, or try to prove alternative theories such as the multiverse.

The cosmological multiverse is not the multi-dimensional multiverse of science fiction or comic books, but the idea that the Big Bang, during Cosmic Inflation, expanded at such a rate it threw quantum proto-energy and proto-matter far, far beyond our observable universe, where these "clumps" of quantum proto-matter and proto-energy coalesced into a near-infinite number of universes. These other universes may have formed different physical laws as they exited their quantum state into a physical state. Certainly many would collapse into a supermassive black hole, while others would dissipate into a cold mist. Given a trillion-quintillion different universes, probability would of course suggest at least one would form that could support two trillion galaxies, and as luck would have it, it was ours.

But the multiverse theory does not change the fact our universe is rare; instead, it accepts that rarity and attempts to explain it. The claim that our universe is not rare is untenable.

Scientists had to hypothesize dark matter and dark energy to explain the balance in our universe, even though neither can be directly observed.

Our galaxy, the Milky Way, is not rare. Barred spiral galaxies are the most numerous galaxies, representing about 40% of the galaxies in the universe. However, about 10% of spiral galaxies are "active galaxies", either Seyfert galaxies or quasars, and some cosmologists believe the radiation bursts from active galaxies would prevent life from forming within them. Subtracting the active galaxies brings the share of non-active barred spirals down to about 36%, or roughly one-third of the galaxies in the universe.

But why focus on barred spiral galaxies? Barred spiral galaxies are considered the most mature galaxies. That maturity means a barred spiral has enough Population I stars, which have enough metal content to form rocky planets. Population I stars are third-generation stars; it is unlikely for life-bearing planetary systems to develop around second-generation Population II stars, and impossible around first-generation Population III stars. Also, barred spiral galaxies have fewer spiral arms, and more space between them, which means they have a larger galactic habitable zone. Even more important, it means there are more areas in the galaxy not so crowded with stars that interstellar cosmic rays and nearby supernovas would disrupt life on nearby planetary systems. If barred spiral galaxies offer the best opportunity for life-bearing planetary systems, then only about one-third of the galaxies in the universe have a high potential for life.

Our Sun is rare. It is a G-Type star. Only about 7% of the Milky Way's stars are G-Type. G-Type stars have the largest circumstellar habitable zone, a habitable zone distant enough that tidal locking is not an issue, and a low level of x-rays compared to other small star types. The location of the Sun in the Milky Way is also rare. We are in the "suburbs" of the Milky Way, nestled between the Cygnus and Orion spurs, between the two major spiral arms of the Milky Way. We are in a place dense enough to benefit from past supernovas to generate planetary nebulas, but far enough away from the much denser main spiral arms to avoid destructive supernovas and more intense cosmic radiation.

Our Solar System is rare. It was seeded not only with metals from a past supernova, but also with very heavy metals from a past kilonova (the collision of two neutron stars, an exceedingly rare event). Smelting metal is required for intelligent life to advance beyond the stone age: the presence of metals like tin and copper in the Earth's crust allowed metallurgy to develop, and the presence of uranium made the atomic age possible. Our solar system also has a very large gas giant (Jupiter) as the first planet outside the habitable zone, which reduces collisions from asteroids, comets, and meteors in the inner solar system.

Our planet is rare. It is in the center of the circumstellar habitable zone, has a magnetic field, and has oxygen at an ideal level. It has an axial tilt that provides seasons. The Earth has a large satellite (the Moon) that provides tides and reduces collisions with asteroids. The presence of every stable element of the Periodic Table in the Earth's crust is unbelievably rare.

Cosmologist Brian Keating, on the Joe Rogan podcast in August 2023, posited that if there were eight factors required for life on Earth (in reality there may be tens of trillions), and each factor had a one in a thousand chance of occurring (in reality it might be one in a billion), the result, one in ten to the twenty-fourth power, is comparable to one opportunity among every star that has ever existed in the history of the universe. Not one star of those currently in the universe, but one star in the entire history of the universe. The universe is currently on its third generation of stars.
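
Keating's arithmetic is worth writing out. Eight independent factors, each at an assumed one-in-a-thousand probability, gives

\[ \left(\frac{1}{1000}\right)^{8} = 10^{-24}, \]

one chance in a trillion trillion, on the order of one opportunity among all the stars that have ever existed.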

It really does not matter whether one believes in a cosmic creator; we can know scientifically that our existence, and the existence of any intelligent life in our universe, is extremely rare and fortuitous. And it all starts with the fact the Big Bang neither dissipated into a cold mist nor collapsed back on itself.

Wednesday, March 29, 2023

Brutal Efficiency

In a December 2006 interview with CNET, Sun Microsystems Chief Technology Officer Greg Papadopoulos repeated the 1943 statement attributed to IBM's then CEO, Thomas J. Watson, that the world only needed five computers. Papadopoulos was referring to the large service providers which were just starting to emerge. 2006 was also the year Amazon Web Services, now synonymous with cloud computing, released its S3 storage service and its EC2 compute service.

Papadopoulos also noted the large service providers, due to their scale and their investment in automation, were capable of driving “brutal efficiencies.” The web-scale services (web search, e-commerce, etc.) drove very high levels of utilization, and Papadopoulos believed the service providers would follow that model. That is exactly what happened with the hyperscale public cloud providers. They drive extreme levels of efficiency through secure virtualization and continuous capacity management. As a result, hyperscale service providers are now the standard for IT efficiency.

In the past, these levels of utilization and efficiency have been difficult to achieve in on-premises organizational IT. VMware provided the hypervisor software that drove a wave of consolidation and efficiency improvements, but efficiency gains have stagnated since. The inability to operate on-premises IT in a highly efficient manner is a large driver of moving on-premises software to SaaS providers and on-premises compute to cloud providers. But in most cases, the “lift and shift” of heavy, traditional applications to the cloud proves more costly than operating them on-prem.

Another issue is that organizational sustainability goals now require new considerations about IT efficiency. In fact, in some cases, migrating on-prem software to SaaS, and lifting and shifting custom applications to cloud providers, is done simply to outsource the organization’s electrical consumption so it can better meet its sustainability goals.

But what happens when the two are in conflict, when the cost of running customer workloads in the cloud is higher than on-prem, but there is a desire to maximize IT efficiency to meet sustainability goals? The answer is that private clouds and on-prem IT must operate with efficiency goals similar to public clouds. Another consideration is real-estate consolidation initiatives, under which owned data centers run against the organization’s real estate strategy. This usually means owned IT resources are hosted in colocation facilities. Also, for those organizations looking to build a true hybrid cloud, there is the desire to move owned IT resources to cloud-connected colocation facilities. But unlike an owned data center, where there might be plenty of available space, every square foot of a colo costs money. So, improving efficiency reduces colocation costs.

There is another factor driving the need for improving on-prem IT efficiency. Newer, denser CPUs and memory are consuming more power. Straightforward “one for one” replacement strategies will force either fewer servers per rack, or power and cooling investments in the data center. The cloud providers have no problem configuring servers with hundreds of cores and terabytes of RAM, then loading dozens of virtual machines from many different customers on the same server. But many traditional IT shops fear high consolidation ratios due to the “too many eggs in one basket” philosophy. Of course, the number of eggs that can be tolerated in one basket does grow over time, but not at the rate of Moore’s Law.

IT organizations need to look at VMs and servers the same way storage administrators looked at thin provisioning on all-flash arrays. When less expensive hard drive systems dominated organizational data storage, it was easy to just thick provision everything. After all, it ensured performance and minimized issues and management effort. But all-flash was considerably more expensive per TB, so thin provisioning was necessary. Performance of all-flash was not an issue, so “thick provision eager zeroed” VMDKs, used to maximize performance as data in a VMDK grew, were no longer necessary. But it did impact management. Thin provisioning was scary. What happened if something went wrong? What happened if there was a runaway data-writing process? Could it fill the capacity of multiple thin-provisioned volumes and take down multiple apps? But for the last eight years, all-flash arrays have been used and managed within IT organizations. At its optimum, this means a thin-provisioned VM on a VMware datastore, on a thin-provisioned LUN on the storage array. So, there is an experience base in “thin everywhere” and “thin on thin” (VMware thin provisioning on storage array thin provisioning) operations.

With each generation of CPUs adding low-level virtualization features and increased instruction-level parallelism, both at a Moore’s Law rate that exceeds the growth of software’s ability to consume them, we should be seeing higher vCPU to core ratios. But increasingly, we see lower vCPU to core ratios, due to the desire to avoid performance issues. While VMware memory sharing (transparent page sharing) is not used often due to security concerns, VMware memory overcommit features are safe and well understood, but likely underutilized. While memory sharing is off by default on ESXi, it is on by default in VMware Cloud on AWS, as are ballooning and memory compression. VMware Cloud on AWS seeks to drive very high levels of efficiency. In essence, there are equivalents of “thin provisioning” virtual CPUs on physical CPU cores, thin provisioning virtual RAM on physical RAM, and thin provisioning virtual networks on physical networks. Another term for thin provisioning in these cases is oversubscription, and we manage oversubscription with tools like QoS. Tools similar to QoS exist for storage (VMware Storage I/O Control, storage array QoS, etc.), CPU, and memory (VMware resource allocation shares, reservations, and limits, etc.). But we need deep visibility into storage IOPS, CPU usage, and memory consumption if we want to drive higher levels of oversubscription in these resources. And we must if we want more efficiency.
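
As a simple illustration of “thin provisioning” CPU and RAM, here is a sketch (the function and the inventory are mine, for illustration only) that computes the oversubscription ratios for a single host:

def oversubscription(vms, host_cores, host_ram_gb):
    """Compute vCPU:pCore and vRAM:pRAM ratios for one host.
    vms is a list of (vcpus, vram_gb) tuples."""
    total_vcpus = sum(v for v, _ in vms)
    total_vram_gb = sum(r for _, r in vms)
    return total_vcpus / host_cores, total_vram_gb / host_ram_gb

# Hypothetical host: 64 cores and 1 TB RAM running twenty 8-vCPU, 64 GB VMs.
cpu_ratio, ram_ratio = oversubscription([(8, 64)] * 20, 64, 1024)
print(f"vCPU:pCore = {cpu_ratio:.1f}, vRAM:pRAM = {ram_ratio:.2f}")
# vCPU:pCore = 2.5, vRAM:pRAM = 1.25

A 2.5:1 vCPU ratio would be conservative by hyperscaler standards, yet many traditional shops run below 1:1; ratios like these are exactly what the visibility tooling discussed below should monitor.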

CPU, hypervisor, and network consolidation and virtualization features have increased dramatically over the last decade, affording business IT customers the opportunity to significantly increase consolidation, including higher vCPU to core ratios.

While host-level CPU utilization below 50% is typical in VMware environments, it is also not unusual to see VMs over-configured with vRAM. This often is due to ISV recommendations, which are frequently over-specified to ensure expected performance.

What is needed is visibility into the virtual and physical infrastructure to identify inefficient configurations, adjust them to eliminate inefficiencies, and drive higher levels of utilization. A visibility tool must constantly monitor the environment, because after an initial “right-sizing” (reducing allocated resources to only what is needed), resource requirements may change and grow, requiring later adjustments. The good thing is IT management tools have improved significantly over the last decade, allowing efficient “as a service” approaches to business IT.

The incremental improvements in IT efficiency of the past are no longer sufficient. The potential for significant improvements in IT efficiency now exists. When properly implemented, the right tools allow both lower costs and the achievement of sustainability goals.

Tuesday, August 30, 2022

When did pessimism become a virtue?

Pessimism and worry have become virtues. Too often the pessimist is a pessimist because he or she feels being pessimistic represents clear-eyed, honest, realistic, and educated conclusions. The pessimist looks at the optimist and assumes the optimist is naïve, deluded, ignorant, or worse, just doesn't care. The kids have come up with a term for this: the "doomer".

"Doomer" Wojak meme

This is not ideological. It happens on the left and the right. Some on the left are pessimistic about the future due to climate change. Some on the right are pessimistic about the future due to crime, or morality.

Relatedly, somehow we came to equate worry with care. We reached a point where the more worry a problem causes someone, the more we assume they care. And those who are not worried, those who are ... carefree ... must be so because they do not care about this important thing we all should worry about.

He is not a crank, he just cares more than you do.

The problem with this is that worry leads to pessimism, which leads to apocalyptic worry. Environmentalists like Greta Thunberg, Extinction Rebellion, and the Sunrise Movement all represent apocalyptic movements. They literally believe the world is ending. In that sense, they are not very different from the apocalyptic movements of Jim Jones' Peoples Temple or the Heaven's Gate movement of Comet Hale-Bopp fame. And we know how those two ended.

Apocalyptic movements are dangerous. Moral calculus changes when you think your remaining time on Earth is measured in a short period of time. That is why eco-terrorism is a thing. That is why Earth First! literally tried to kill people.

Likewise, on the socially conservative right, we see people who think the decline in morality portends the end of times. Increasing sexual liberation, starting with the sexual revolution, the legalization of abortion, continuing with the gay rights movement, and now the transgender rights movement, triggers more concern. In the past there has been nihilistic right-wing terrorism such as Timothy McVeigh bombing the Murrah Federal Building. We have seen right-wing terrorism in the form of attacks on abortion clinics and assaults and murders of abortion providers. It may actually be a positive that the Supreme Court overturned Roe v. Wade because it may defuse some of the more violent tendencies of the most extreme elements of the socially conservative right. But at the same time, it now incites elements of the political left.

Of course there is worry that is purposeful, just like fear (worry is a form of fear). Worry is why we strap our children into car seats, put safety caps on electrical outlets when we have toddlers, make sure the smoke detectors in our homes work, express concern when our children struggle in school, and polish our resumes when our employer has severe economic struggles. That is not the worry I am speaking of. I am talking about existential worry that the future is lost.

Similarly, Abrahamic theology suggests worry is a negative emotion, which while not directly a sin, leads to sin because it reduces faith in God. Even a secular person should realize worry is corrosive.

And don't get me wrong. I probably have more concern about the future than most. I am concerned about a looming energy crisis and a grain shortage that together might cause a depression in Europe and famine in the developing world. But that to me is concern, not worry. I get more worried about missing a flight than $1,000 a month energy bills. At least for now. Come back in January, 2023 and maybe I will be worried.

What led me to write this? It was the last few episodes of the All-In Podcast. Co-host and venture capitalist David Friedberg lamented the continuing topics of inflation, gloom, and doom. Friedberg is the eternal optimist on the podcast, and his optimism is both refreshing and motivating.

Optimism is a virtue. Pessimism is not. Nor is worry.

Friedberg is interested in things like nuclear fusion, carbon capture, new food-growing technologies, etc. Just today I saw an article on another breakthrough in manufacturing milk through precision fermentation. This is not "nut milk" or another substitute; it is a molecule-for-molecule equivalent beverage, where a fermentation process replaces biological processes. These kinds of breakthroughs, along with other bioreactor technology, may be the very thing that solves the agricultural greenhouse gas problem. These are the technologies that interest Friedberg.

Likewise, I read a story where some people working on geothermal energy are looking at using the powerful lasers used in nuclear fusion to improve their drilling capability. The big deal here is this would allow deep drilling everywhere, so you could literally drill under an existing coal fired power plant and turn it into a geothermal plant. Cheap and plentiful geothermal made possible by laser-enhanced drilling. Nuclear fusion research dollars paying off with clean energy from another source.

Then there is carbon capture, or more accurately carbon-dioxide capture. This would be game-changing, because not only would it solve the CO2 problem, it would provide a source of CO2 that could be used to make renewable synthetic fuel.

There are many reasons to be optimistic. There are more reasons to be optimistic than pessimistic. Riva Tez noted "If you convey an optimistic idea about how the world can be, you’re also suggesting that there’s a responsibility for us to be able to get there", and that pessimism "reduce[s] people's agency to solve problems because there's no point."

This is why we should default to optimism. Even if pessimistic, one should "fake it until they make it" with optimism, if only to motivate the best out of others. That is virtue.

UPDATE, March 2023

Two excellent Substacks have been published in the last month which also speak to this. The first is by Noah Smith on February 22nd:

Don't be a doomer

The second is by Sanjana Friedman on March 3rd, posted at Mike Solana's Pirate Wires Substack:

Collapse Support: The Doomsday Prophets of Reddit


Thursday, March 24, 2022

UFOs are a Thing Again

Just over two years ago the US Navy released video taken in 2004 of what would later be called the “Tic Tac” UFO. The UFO intercepted by the US Navy rekindled interest in the idea of extraterrestrial visits. Most people consider “close encounters” by assuming the UFO represents a highly advanced, interstellar-capable, extraterrestrial civilization and working backwards: they assume the UFO is extraterrestrial in origin, represents an advanced civilization, and attempt to assign cause and meaning to it.

I suggest a different approach. Instead of starting with an advanced, interstellar spacefaring society and working backward (“Why would they explore us?”), I suggest we start with our own non-advanced, non-interstellar-spacefaring society and work forward.

First, we have to ask: what would it take to be interstellar spacefaring? A good first step is harnessing enough power for interstellar travel. That means we have to first assume harnessing such power is feasible. For this mental exercise, I will always assume such advanced technology is feasible; after all, the presumption that the Tic Tac UFOs are extraterrestrial suggests interstellar travel technologies are feasible.

The first technology is harnessing nuclear fusion in a net-positive way. One has to ask, when will nuclear fusion be harnessed by humanity? In 20 years? 50? 80? With each increase in time, the probability increases. While the joke is nuclear fusion is always 20 years away, 20 years from now might be optimistic. We could assign an 80% chance. In 50 years, perhaps the chance is 90%. By the dawn of the 22nd century, it might be reasonable to say the probability is over 99%.

The next technology to consider is faster-than-light (FTL) travel. The current FTL concept that gets the most traction is the Alcubierre Drive, proposed by physicist Miguel Alcubierre. The Alcubierre Drive is a “space warp drive” concept, which creates a sort of local wormhole, and it is being researched by NASA. The problem with the Alcubierre Drive is it requires impossible amounts of energy. The energy requirements have come down as physicists have refined the concept. Regardless, the power required will likely be significant.

That brings up the best known conceptual energy source more powerful than nuclear fusion: Antimatter. It appears Gene Roddenberry was onto something when he proposed the idea of a “space warp drive” powered by antimatter in his 1960s Star Trek television series.

Let’s assume the same timeline for both harnessing antimatter for power and refining an Alcubierre Drive. Do we believe it is possible in 180 years? How about 280? 380? How about 480? What if we assume an 80% chance for 180 years, 90% for 280, 95% for 380, and 99% for 480? By the dawn of the 26th century, we will likely be an interstellar species.

But even if we assume 280 years for perfection of FTL travel, the dawn of the 24th century, we can extrapolate some ideas.

The first is that any FTL spaceship in the 24th century likely would not require a human crew. “The singularity”, a point where artificial general intelligence passes the Turing test, and where humans will be augmented physically by machines and cognitively by computers, will likely occur sometime before the end of the 21st century. Computation will be limited by Moore’s law (which will become a hindrance, rather than an enabler, in the future), but new forms of computing, such as quantum computing, are likely to emerge. Nanotechnology is another area ripe for advancement over the 21st century. And biotechnology could impact both computing and nanotechnology.

While Gene Roddenberry got warp drive and antimatter right, he clearly got starship crew sizes very wrong. Instead of crews of 400 (the “Original Series”) or 1,000 (Star Trek: The Next Generation), a crew of a few dozen would be more than adequate. The idea of a holographic doctor (Star Trek: Voyager) on board a starship is much more realistic in a 24th century timeline. But realistically, small, fully autonomous starships make more sense.

Keeping the mass small would greatly reduce the power required. Building a starship “as small as possible”, but “as large as necessary” would be the most likely path. This means the size would be dictated by the largest component, as nanotechnology would shrink things like control computers, sensors, recording devices, etc. to very small sizes. Most likely, the FTL drive and antimatter containment system would dictate the size of the spacecraft.

While Star Trek solved the long-distance communication problem with an FTL communications mechanism, this is likely the most difficult problem to solve. More likely, FTL spacecraft will be like the sailors of old, required to return to their home port to tell their tales. This is another argument for very small FTL spacecraft, which would likely be capable of higher FTL speeds due to their low mass.

Another unknown beyond fusion, Alcubierre Drives, and antimatter is the domain of harnessing gravity. Physicists believe gravity, like light, exists as both a wave and a particle (the graviton), but gravitons are yet to be discovered, and therefore harnessing them is not yet possible. But assuming it is possible to harness gravity, it could dramatically change slower-than-light travel. It might be possible to protect the inside of a spacecraft from acceleration forces. It could allow a craft to reflect gravity and travel like an air hockey puck, changing direction at high speed, not by being struck, but simply by pointing a graviton beam. This would be especially useful not only for traveling within a solar system, but within a planetary gravity well, that is, landing on or flying close to a planet. Instead of fusion-based thrusters needing hydrogen fuel in proportion to a planet’s gravity, a lander that harnessed gravity might reflect the planet’s gravity in a way that works at any gravity level.

I think you can see where I am going with this. A very small robotic spacecraft, capable of flying into a planet’s atmosphere and harnessing gravity so it can move without visible exhaust or aerodynamic surfaces … it sounds like a Tic Tac UFO.

The Tic Tac UFO makes more sense than the ship from Close Encounters. A mothership is possible, but it would likely be small, and the craft it carries smaller still. “Greys” or similar alien species would stay home. “Alien abductions” would have to be done by robots, but honestly, if we assume Star Trek tricorders and medical scanners will be real, why would there be any need to abduct a human to learn about their biology?

Do I believe the Tic Tac UFO is an extraterrestrial spacecraft? I don’t know. But I do think in 300 to 500 years, if interstellar travel is possible, human beings will produce small, autonomous interstellar spacecraft, not giant starships of today’s science fiction.

And I think this approach of projecting forward gives a better idea of what to look for than attempting to project a cause onto a phenomenon.

Saturday, January 01, 2022

Why Energy is Everything

The story of the last several centuries has been one of the benefits of humanity's technological advances moving faster than the negative effects.

However, the positive impacts are bursty, while the negative impacts tend to be linear. It feels like we are past due for a big positive burst, though that is partly because we failed to leverage past positive bursts.

To explain: energy transitions, say from wood to coal, and from coal to petroleum, are bursty in nature. Accumulated pollution is linear.

The single biggest mistake humanity has made in the last 75 years was not to be more aggressive with the implementation of nuclear power generation. Only France got it right. My lifetime of just over a half a century has been marked by a near-constant series of moral panics related to energy.

One of my early memories is the 1973 energy crisis. This was triggered by the Arab Oil Embargo in response to the U.S.'s material support of Israel during the Yom Kippur War (the Operation Nickel Grass airlift).

The Arab Oil Embargo only lasted from October 1973 to March 1974. But the era of the never ending "Energy Crisis" had started.

Energy is everything. Einstein showed energy and matter are the same thing; another way of looking at it is that they are interchangeable (E = mc²). We typically think only of converting matter to energy (burning fuel). However, most of what we create uses energy. We smelt metals and create alloys. We saw and press lumber. We create plastics and other synthetic materials. We lithograph semiconductors. We bend, weld, shape, and assemble metal. We write and test code. We do research. We create medicines. All these acts of creation consume significant energy. And as technology increases in capability, there will be need for ever more energy. There is a very real possibility our ability to innovate will be limited by available energy.

I recently read a short book by Michael Denton, “Fire-Maker”. It is an Intelligent Design apologetics book, so it will not appeal to all. But it points out something extremely important: when it comes to civilization, energy is everything. Everything.

Pottery, glass, metallurgy. The ability to take one form of matter and transform it. Fire made that possible. Combustion, or more accurately, creating heat, is the most important form of energy we have.

All of the matter we transform, from mashing potatoes to cutting timber, making steel, making Portland cement, hammering, screwing, and welding, requires energy.

There is energy content embodied in everything, and most of that energy came from heat.

The bigger the transformation of the matter, the more energy is required. And even small technologies like semiconductors require significant energy.

The more matter we transform, the more energy is required. A growing world population requires more energy.

Energy is everything. It is not just the fuel you put in your car, it is in all the steel, aluminum, plastic, and electronics in your car.

Energy is everything.

We have seen increasing energy consumption driven by computation. At the same time, there have been significant improvements in energy efficiency (e.g., LED lighting), and concepts like cloud computing promise more efficiency. While these efficiency improvements free up energy for other uses, we are still often energy constrained (summer heat waves forcing rolling blackouts, etc.).

We have done a great job with energy efficiency. But there are diminishing returns with efficiency, and we have probably made most of the gains we can. Certainly, today’s household appliances are much more efficient than those from 50 years ago. Heat pumps went from a marginal technology only useful in the Sun Belt to very viable across most of the U.S. Air conditioners and refrigerators are much more efficient, despite being forced to use less efficient refrigerants due to the Freon ban. But how much more is possible? We progressed from incandescent, to compact fluorescent, to LED lighting. Tyvek wraps and much better insulation mean houses are much more thermally efficient. Smart thermostats also help. But the marginal gains beyond 2020 are not likely to match the efficiency gains of the last 50 years.

Another factor which appears to have stabilized is the demand for larger and larger homes. Home square footage grew throughout the 20th century, but that growth seems to have stalled, and may have reversed, in large part due to decreasing family sizes. There has also been a rise in blended families with the Baby Boomer generation, but those blended families live in the home only for a few years before the children age out. Large Boomer homes intended to provide plenty of space for holiday gatherings with adult children and grandchildren are declining in demand, and it is likely the late Boomers and early GenXers will leverage nearby hotels and Airbnb to provide temporary living space for holiday gatherings of family members.

At the same time, early Millennials, and even younger ones, are starting to move to the suburbs as they marry and have families. Another factor to consider, especially with the COVID-19 pandemic, is the need for one or even two home offices. But with an average of fewer than two children, at most five bedrooms will be required; more likely, four bedrooms will be the norm, with a basement or some alternative space for a semi-permanent home office.

Outside of the US, the work from home forced by the COVID-19 pandemic may change the desired home. Europe is known for space-efficient, compact housing, and many European countries have very low birth rates. In the past, a two-bedroom flat might have met the need. But there may now be demand for larger apartments.

This could drive more energy consumption in Europe, up from current levels.

Then you look at the industrial/business demands for energy.

As Moore’s law’s marginal gains decline, the energy efficiency improvements of computing will decline with them. At that point there will be a rise in energy requirements for computing.

The demand for data analytics and AI will drive the demand for more energy.

The demand for more robotics and automation will drive the demand for more energy.

The demand for electric vehicles will drive the demand for different energy (electricity vs. petroleum).

The demand for autonomous vehicles will drive the demand for more energy.

The demand for more granular autonomous vehicle services will drive the demand for more energy.

The demand for more granular autonomous delivery services will drive the demand for more energy.

The 21st Century lifestyle will demand more energy than the 20th Century lifestyle.

And the developing world transitioning to a 20th Century developed-world lifestyle will demand significantly more energy.

This last point is very important.

What happens when everything becomes digital?

I would bet between 2030 and 2050, a country’s economic success and foreign policy influence will be directly proportional to the percentage of their energy derived from nuclear power.

Also, the first nation to achieve nuclear fusion power at scale will likely propel itself into a global economic advantage.

The cost of AI, robotics, automated manufacturing, and autonomous military drones will plummet for a fusion-powered nation state.

There is a common saying by those in the nuclear power industry: "Fusion power is only 20 years away." Over the decades, the addendum "This time, we really mean it" could be added. However, if cost-effective, utility-scale fusion power really is 20 years away, that is roughly 2040. And given fusion should have much lower accident and security risks, one can assume the regulatory burden will be lighter, meaning it will be possible to build fusion power plants much more quickly than fission plants. So, if fusion is readily available by 2040, and fusion plants are easy to build, it could be a significant percentage of a nation's power generation by 2050. Fusion would be a state-of-the-art technology in 2050.

The effect of at-scale fusion power is the promise of order-of-magnitude deflation of energy costs. As all economics happens on the margins, the marginal price of generating a watt collapsing to near zero (after the sunk capital costs of the fusion plant) would put significant pressure on all other forms of energy generation, which would only accelerate the adoption of fusion.
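As a toy illustration of why near-zero marginal cost matters, here is a minimal levelized-cost sketch. Every number is an assumption invented for the example (capital costs, capacity factor, fuel prices), not a forecast:

```python
# Illustrative only: a toy levelized-cost comparison. All figures are
# assumptions for the sake of the sketch, not real plant data.

def lcoe(capex, crf, fixed_om, fuel_per_mwh, mwh_per_year):
    """Levelized cost: annualized capital plus fixed O&M spread over annual
    output, plus per-MWh fuel/variable cost."""
    return (capex * crf + fixed_om) / mwh_per_year + fuel_per_mwh

YEAR_MWH = 8760 * 1000 * 0.90   # assumed 1 GW plant at 90% capacity factor

gas = lcoe(capex=1.0e9, crf=0.08, fixed_om=3.0e7, fuel_per_mwh=30.0,
           mwh_per_year=YEAR_MWH)
fusion = lcoe(capex=6.0e9, crf=0.08, fixed_om=6.0e7, fuel_per_mwh=0.5,
              mwh_per_year=YEAR_MWH)

print(f"toy gas LCOE:    ${gas:6.2f}/MWh")
print(f"toy fusion LCOE: ${fusion:6.2f}/MWh")
# Even if fusion's all-in LCOE starts higher (it is capital-heavy), once the
# plant's capital is sunk its marginal cost is ~$0.50/MWh, so it underbids
# any fuel-burning plant in every hour it can run.
```

The sketch shows the margin dynamic in the paragraph above: a built fusion plant has almost nothing to save by not running, so it prices fuel-burning competitors out of the dispatch order.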

Then you must ask: "What other technologies could be state of the art by 2050?"

Obviously, artificial intelligence (AI) will be much more mature in 30 years. It is possible artificial general intelligence (AGI) will be available. Additive manufacturing (3D printing and similar technologies) will likely have matured to the point of being standard. Advancement in robotics is governed in large part by advancements in AI. Nanotechnology is a distinct field from robotics, but benefits robotics greatly.

Additive manufacturing, AI, robotics, and nanotechnology will lead not only to fully automated manufacturing, but to "programmable manufacturing", "tooling as code", software-defined manufacturing in programmable factories: factories that can be changed on the fly to manufacture different things. This amortizes capital over a much broader output. It means the classic desire to offset the cost of expensive manufacturing equipment by leveraging locations with low land, regulatory, and construction costs will not carry the same weight. The full automation of manufacturing also means the most significant variable cost becomes not labor, but energy. The assumption that small nations with limited populations cannot have significant manufacturing output would no longer hold, provided the nation has access to low-cost power.
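As a sketch of what "tooling as code" might look like, here is a hypothetical Python fragment. The names and structure are invented for illustration; no such system exists yet:

```python
# Hypothetical sketch of "tooling as code": a product is just data, and the
# same fully automated line re-targets by loading a different program.
from dataclasses import dataclass

@dataclass
class Step:
    station: str    # e.g., "print", "mill", "assemble"
    params: dict    # machine settings for this product

@dataclass
class ProductProgram:
    name: str
    steps: list[Step]

def run(line_stations: dict, program: ProductProgram) -> None:
    """'Re-tool' the line by executing a product's steps in software;
    no physical retooling or capital project required."""
    for step in program.steps:
        line_stations[step.station](**step.params)

# Switching output becomes a software deploy:
#   run(stations, drone_airframe_v2)
#   run(stations, water_pump_housing)
```

The design point is that the expensive physical plant stays constant while the product definition is versioned and swapped like source code, which is what amortizes the capital over many different outputs.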

But for larger, more capable states, the potential of very low-cost power is more significant. The ability to rapidly manufacture military equipment in large numbers (autonomous drones, cruise missiles, combat robots) would be a strategic national capability, like naval shipbuilding.

Cheap energy has always been an enabler of significant national strength. The UK's and US's access to large coal reserves drove both countries' industrial revolutions, industrial power, and national power. Coal allowed the emergence of blue-water navies. Coal was also critical in steelmaking, a key input for industrial products as well as military weapons. Large US petroleum reserves were important in driving military armor and airpower, which were critical domains of WW2.

The lack of large amounts of low-cost, zero-CO2 energy is the limiter of progress in areas like AI and other key technologies. One only needs to look at the energy consumption of cryptocurrencies to get an idea of the energy consumption of AI computation.

Raw materials are still required. But robotics and automation offer the potential for not only lower-cost extraction, but precision extraction, and more difficult, risky, and dangerous extraction. The ability of robotic mining to extract more ore from more difficult places means much larger proven elemental reserves, which will drive down the cost of these commodities.

As all capital (property, plant, equipment) is manufactured product, produced from raw materials with labor, robotics, automation, lower-cost raw materials, and low-cost energy will result in lower-cost capital equipment. Automated robot manufacturing plants built by automated construction robots will produce lower-cost automated manufacturing robots. It will be a virtuous circle.

Lower-cost raw materials, a near-zero cost of labor, a low cost of energy, and the impact these trends will have on capital costs mean the cost of manufacturing complex products will drop significantly.

Ultimately, the limiting cost is energy, and that is where fusion power comes in.

Regarding intermittent renewables, such as wind and solar: they will require significant energy storage to substitute for base-load power. While solar and batteries are falling in price, the primary reason for that price decline is low-cost labor. 80% of solar panels are manufactured in China, and there is the very real matter of China's human rights violations, which rise to the level of slave labor and genocide. Modern lithium-ion batteries are dependent on cobalt, most of which is mined in the Democratic Republic of the Congo using child labor, another human rights violation. While efforts are being made to remove the dependence on cobalt in modern batteries, one must also consider the sheer scale required to store intermittently generated renewable energy in batteries.

Certainly pumped-storage hydroelectricity (PSH) is an effective means of energy storage, and has been used for over half a century, but the best locations for utility-scale solar plants are not conducive to PSH. PSH could make sense for utility-scale wind power in some locations. But the biggest limitations of PSH are the need for specific land resources, and the environmental impact.
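As a back-of-the-envelope illustration of why PSH is so land-hungry, here is a small sketch using the potential-energy formula E = mgh. The reservoir figures are hypothetical, chosen only to show the scale:

```python
# Illustrative only: energy stored by a pumped-storage reservoir,
# E = (density * volume) * g * head * efficiency. Hypothetical figures.
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def psh_energy_mwh(volume_m3: float, head_m: float,
                   efficiency: float = 0.80) -> float:
    joules = RHO * volume_m3 * G * head_m * efficiency
    return joules / 3.6e9   # joules -> MWh

# A 4-million-cubic-meter upper reservoir with 300 m of head (hypothetical):
print(f"{psh_energy_mwh(4e6, 300):,.0f} MWh usable storage")
# ~2,600 MWh: an entire mountain reservoir holds roughly what a 1 GW plant
# generates in under three hours, hence the terrain-and-land constraint.
```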

Regardless of the method, energy storage is subject to the laws of thermodynamics, which dictate efficiency losses: more energy must be generated and put into storage than can later be drawn back out.
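A minimal sketch of that round-trip loss, with assumed charge and discharge efficiencies (real values vary by storage technology):

```python
# Illustrative only: round-trip efficiency of storage. The charge/discharge
# efficiencies are assumptions, not measurements of any real system.
eta_charge, eta_discharge = 0.92, 0.92
round_trip = eta_charge * eta_discharge   # ~85% survives the round trip

deliver = 100.0                            # MWh we want back from storage
generate = deliver / round_trip            # MWh we must generate up front
print(f"round trip: {round_trip:.0%}; generate {generate:.0f} MWh "
      f"to deliver {deliver:.0f} MWh")
```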

In the near-term, intermittent renewables provide power generation that can contribute to the grid while demand is managed with dispatchable power sources such as hydroelectric and natural gas. Solar, which peaks during summer afternoons, is ideal to offset the increased demand for air conditioning during that time.

However, a recent study found photovoltaic cells degrade much faster than originally expected. This means a solar panel assumed to maintain 90% of its capacity at 20 years might drop to the 90% threshold before 13 years, and be as low as 83% by 20 years. This would necessitate either more frequent panel replacement or larger solar power facilities.
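A small sketch of what those numbers imply for annual degradation rates, assuming simple compound (geometric) degradation:

```python
# Illustrative only: implied annual degradation rates from the figures above,
# assuming compound (geometric) year-over-year capacity loss.
def annual_rate(remaining: float, years: float) -> float:
    """Per-year degradation rate r such that (1 - r) ** years == remaining."""
    return 1.0 - remaining ** (1.0 / years)

spec = annual_rate(0.90, 20)     # spec sheet: 90% capacity left at year 20
faster = annual_rate(0.90, 13)   # study-like case: 90% reached by year 13

print(f"spec sheet:  {spec:.2%}/yr")
print(f"faster case: {faster:.2%}/yr -> "
      f"{(1.0 - faster) ** 20:.0%} left at year 20")
# ~0.53%/yr vs ~0.81%/yr; under this model the faster rate leaves roughly
# 85% at year 20, close to the low-80s figure cited above, forcing bigger
# farms or earlier panel replacement.
```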

Another aspect of the developed world's economy is that it is increasingly moving toward 24-hour operations, meaning nighttime energy demands are increasing and will continue to increase. In some locations, winds increase in the evening and at night (such as California's Sacramento-San Joaquin Delta "Delta Breezes"). However, long-term changes in climate may mean reductions in wind: the Delta Breezes declined significantly over the 20-year period from 1995 to 2015.

It looks increasingly likely that significant planning and investment will be required to build utility-scale renewable power with storage. Due to higher-than-anticipated degradation of solar panels and changes in wind patterns, larger and more distributed solar and wind farms will be required. And that assumes the concerns about human rights violations (solar panels and batteries) and environmental impacts (all forms of renewables and storage) can be overcome. Even then, significant dispatchable power will be required for scenarios where storage is insufficient. The next phase will cost more. The low-hanging fruit of renewables has been picked: solar in sunny places with cheap land, wind in windy places with cheap land, and cheap dispatchable natural gas plants instead of storage.

And the reality is, the moment fusion becomes viable, utility-scale renewables will be obsolete in the developed world. And the first one there wins the race, not to "Net Zero CO2" but to "Zero Cost Energy", or more accurately, nearly free energy.

There are synergies between current generations (II, III, and III+) of fission nuclear power plants, fourth-generation fission plants, and future fusion plants. Some Gen IV fission reactor designs use earlier generations' nuclear waste as fuel. Replacing one old reactor with a new Gen IV reactor, or adding a reactor to an existing nuclear power plant, allows on-site reprocessing of nuclear waste, and the waste from these reactors is much lower in radiation. Also, when fusion becomes viable, it will require hydrogen isotopes for fuel, and extracting hydrogen from the super-heated steam fission plants already produce to generate electricity takes less energy than extracting it from room-temperature water. So siting a fusion reactor next to an existing fission reactor makes some sense.

Wind and solar, and especially solar, are inherently decentralized, and hold tremendous promise for the developing world, much of which is located either in equatorial regions or in the sub-tropical regions of the southern hemisphere. The tropical and subtropical areas are the best locations for solar power. For small, decentralized villages in tropical and subtropical regions of the developing world, solar power and battery storage make sense. For isolated locations above 45 degrees latitude, wind power and battery storage make sense. There are some exceptions: desert regions outside of the tropics and subtropics with consistent sunshine, and areas between the subtropics and 45 degrees with steady winds.

What about high-density population areas in the developing world? Where existing fission plants are not a factor due to security and anti-proliferation concerns, fusion holds the promise of no weaponizable fuel, and no risk of meltdown, along with the associated lower costs of security and containment.

But for developed nations with significant manufacturing and at-scale agriculture, energy will be everything. And that will require a significant increase in energy generation, which will likely require technologies beyond renewables. Technology marches on. AI, robotics, and nanotechnology march on. Autonomous manufacturing, farming, and distribution are coming. First to fusion matters. First to fusion wins.