Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Republicans are scrambling to adjust their tax bill to satisfy concerns of lawmakers worried about budget deficits. The Joint Committee on Taxation (JCT) released a report yesterday finding that the government would lose $1 trillion over a decade in revenue even including the dynamic growth effects. Numerous economists (e.g. here and here) have criticized the JCT’s modeling for undercounting the growth benefits.

Still, the JCT’s $1 trillion figure is within the $1.5 trillion that Republicans allotted themselves for the tax bill. So it is surprising that some senators are moving the goalposts and demanding more revenue. But adding triggers or tax increases makes little sense because rising deficits in coming years will be mainly driven by rising spending, not revenue shortfalls—with or without a tax bill.

The chart below shows CBO’s baseline projections of spending and revenues as a share of gross domestic product (GDP). Revenues (red line) are set to rise from 17.7 percent of GDP this year to 18.4 percent by 2027, while outlays (blue line) will jump from 20.5 percent to 23.6 percent. Revenues creep upwards partly because of “real bracket creep” as people move into higher tax brackets. The chart clearly shows that deficits will rise because of rapid spending growth.

The green line shows revenues including the JCT’s projected effects of the Senate tax bill. Revenues fall in the short run, then rise as a result of numerous tax breaks expiring, and as the economy expands modestly per the JCT estimate. The Senate bill would add debt in the near term, but by year 10 would begin reducing deficits.

Even with the JCT’s lowballed growth estimates, projections show that spending restraint is the key to deficit reduction. Rather than adding tax increases, deficit worriers in the Senate should work to replace no-growth provisions in the tax bill, such as child credits, with pro-growth provisions, such as further rate cuts. And rather than holding up the tax bill, they should redouble their efforts in the new year to cut discretionary spending and reform entitlement programs.


Note: the green line is a rough estimate, as the JCT does not break out dynamic revenue effects from its estimate of spending increases due to rising interest rates.

It is no secret that Secretary of State Rex Tillerson and President Trump haven’t been getting along. According to the New York Times, the administration has developed a plan to replace Tillerson with current CIA director Mike Pompeo. If ousted, Tillerson would have one of the shortest stints as secretary of state in U.S. history—not the worst consequence of the shake-up, though an embarrassing one for Tillerson, and perhaps for the administration. But the most troubling consequence of Tillerson’s departure would be replacing Pompeo at the CIA with Senator Tom Cotton.

To begin with, it’s difficult to believe Cotton is being considered for the position because of his qualifications. Cotton is a freshman senator with no experience in intelligence. Instead, it seems he is being considered for the prestigious role as director because of his “easy” relationship with President Trump. His support for Trump has indeed been unfaltering: he consistently endorses the president’s incoherent foreign policy, and exhibits what seems like blind loyalty rather than objective analysis. For example, on October 9, on The Global Politico podcast, when speaking about Iran, Cotton seemed to indicate that Tillerson and Defense Secretary Mattis should resign if they are unwilling to execute the president’s policies. Trump’s promotion of Cotton also highlights the president’s own desire to surround himself with yes-men who will tell him what he wants to hear.

Second, Cotton supports torture and other extreme interrogation techniques, like waterboarding, and voted against anti-torture safeguards. He has gone as far as to say that waterboarding, currently illegal, is not torture. If Cotton becomes CIA director, he may push to end the restrictions on it, contradicting the assessments of experienced intelligence professionals.

Third, even though his support for the prison at Guantánamo Bay—referred to as Gitmo—is along party lines, it indicates his erroneous thinking on terrorism. Not only does he routinely inflate the threat of terrorism, but his 2015 statement that “there are too many empty beds and cells there right now” ignores the fact that the prison has served as a rallying cry for terrorist groups and has undermined U.S. counterterrorism efforts worldwide. Also, his support for Gitmo in general is puzzling considering his legal background: he’s a graduate of Harvard Law, clerked for federal judge Jerry Smith of the 5th Circuit, and practiced law at Gibson, Dunn & Crutcher before joining the Army and holding political office. His defense of a detention facility whose very existence and jurisdiction have caused the Supreme Court to step in at least four times raises questions about his positions on executive power during wartime.

And fourth, his commitment to a hawkish foreign policy is unwavering. For example, his opposition to Iran is so strong that in 2015 he penned an open letter to Iran’s leadership, directly contradicting and undermining ongoing U.S. diplomacy. This summer, he said, “The policy of the United States should be regime change in Iran.” As head of the CIA, his hawkish tendencies would likely result in more military intervention, risking disasters like the never-ending wars in Iraq and Afghanistan. Intelligence should be driven by objectivity and empirical evidence, and when it is not, disasters like Iraq occur.

In other words, the administration should pause before appointing Senator Cotton, an overtly hawkish politician, to the coveted position of CIA director.

It’s obviously too early to spike the football, but there is a provision in both the Senate and House tax bills that everyone should be able to endorse, except maybe colleges and their athletics departments: eliminating the 80 percent federal tax deduction college sports season ticket holders get when they pay “seat license” fees—often called “charitable gifts”—charged by schools. It’s an absurd deduction that I’ve complained about periodically, and it’s nice to see it targeted for elimination. And in case we need a reminder that this deduction has zilch to do with the “public good” that higher ed so often gives as its excuse for every special treatment it demands, USA Today has reported that this season 12 big football schools alone are on the hook for at least $70 million to buy out fired head coaches. Sounds like a lot of private good there.

These days it seems like we on Team America can’t agree on anything, but we all ought to agree on this: the seat license deduction must go.

This is the first in a series of posts on global temperature records. The problems with surface thermometric records are manifold. Are there more reliable methods for measuring the temperature of the surface and the lower atmosphere?

Let’s face it, global surface temperature histories measured by thermometers are a mess. Recording stations come on- and offline seemingly at random. The time of day when the high and low temperatures for the previous 24 hours are recorded varies, often changing at the same station. This has a demonstrable biasing effect on high or low readings. Local conditions can further bias temperatures. What is the effect of a free-standing tree 100 feet away from a station growing into maturity? And the “urban heat island,” often very crudely accounted for, can artificially warm readings from population centers with as few as 2,500 residents. Neighboring reporting stations can diverge significantly from each other for no known reason.

The list goes on. Historically, temperatures have been recorded by mercury-in-glass thermometers housed in a ventilated white box. But, especially in poorer countries, there’s little financial incentive to keep these boxes the right white, so they may darken over time. That’s guaranteed to make the thermometers read hotter than it actually is. And the transition from glass to electronic thermometers (which read different high temperatures) has hardly been uniform.

Some of these problems are accounted for, and they produce dramatic alterations of original climate records (see here for the oft-noted New York Central Park adjustments) via a process called homogenization. Others, like the problem of station darkening, are not accounted for, even though there’s pretty good evidence that it is artificially warming temperatures in poor tropical nations.

Figure 1. Difference between satellite-measured and ground-measured trends. Artificial warming is largest in the poor regions of Africa and South America. (Source: Figure 4 in McKitrick and Michaels, 2007).

There are multiple “global” temperature histories out there, but they all look pretty much the same because they all run into the problems noted above, and while the applied solutions may be slightly different, they aren’t enough themselves to make the records look very different. The most recent one, from Berkeley Earth (originally called the Berkeley Earth Science Team (BEST) record) is noteworthy because it was generated from scratch (the raw data), but like all the others (all using the same data) it has a warming since 1979 (the dawn of the satellite-sensed temperature era) of around 0.18°C/decade. (Computer models, on average, say it should have been warming at around 0.25°C/decade.)

They all have a problem with temperatures over the Arctic Ocean as there’s not much data. A recent fad has been to extend the land-based data out over the ocean, but that’s very problematic as a mixed ice-water ocean should have a boundary temperature of around freezing, while the land stations can heat up way above that. This extension is in no small part responsible for a recent jump in the global surface average.

It would sure be desirable to have a global surface temperature record that suffered from none of the systematic problems noted above, and—to boot—would be measured by electronic thermometers precisely calibrated every time they were read.

Such a dream exists, in the JRA-55 dataset. The acronym refers to the Japan Meteorological Agency’s (originally) 55-year “reanalysis” data, and it is updated through yesterday.

Here’s how it works. Meteorologists around the world need a simultaneous three-dimensional “snapshot” of the earth’s physical atmosphere upon which to base the forecast for the next ten to sixteen days. So, twice a day, at 0000 and 1200 Greenwich Mean Time (1900 and 0700 EST), weather balloons are released, sensing temperature, pressure, and moisture, and tracked to determine the wind. There’s also satellite “profile” data in the mix, but obviously that wasn’t the case when JRA-55 began in 1958. These observations are then chucked into the national (or private) computers that run the various weather forecast models, and the initial “analysis,” a three-dimensional map based upon the balloon data, provides the starting point for those models.

Once the analyzed data had served its forecasting purpose, it was largely forgotten, until it dawned upon people that this was really good data. And so there have been a number of what are now called “reanalysis” datasets. The most recent, and most scientifically complete, is JRA-55. In a recent paper describing, in incredible detail, how it works, the authors conclude that it is more reliable than any of the previous versions, whether produced by the Japanese agency or elsewhere.

Remember: the thermistors are calibrated at the release point, they are all launched at the same time, there’s no white box to get dirty, and the launch sites are largely in the same place. They aren’t subject to hokey homogenizations. And the reanalysis data has no gaps, using the laws of physics and a high-resolution numerical weather prediction model that generates physically realistic Arctic temperatures, rather than the statistical machinations used in the land-based histories that inflate warming over the Arctic Ocean.

There is one possible confounding factor in that some of the launch sites are pretty close to built-up areas, or are in locations (airports) that tend to attract new infrastructure. That should mean that any warming in those places is likely to be a (very slight) overestimate.

And so here are the JRA-55 surface temperature departures from the 1981–2010 average:

Figure 2. Monthly JRA-55 data beginning in January, 1979, which marks the beginning of the satellite-sensed temperature record.

The warming rate in JRA-55 until the 2015–16 El Niño is 0.10°C/decade, or about 40% of what has been forecast for the era by the average of the UN’s 106 climate model realizations. There’s no reason to think this is going to change much in coming decades, so it’s time to scale back the forecast warming for this century from the UN’s models—which is around 2.2°C using an emissions scenario reflecting the natural gas revolution. Using straight math, that would cut 21st century warming to around 0.9°C. Based upon a literature detailed elsewhere, that seems a bit low (and it also depends upon widespread substitution of natural gas for coal-based electricity).
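The “straight math” scaling above can be checked in a few lines. This is only a back-of-the-envelope sketch using the figures quoted in the text (0.10°C/decade observed versus 0.25°C/decade modeled, and a 2.2°C century forecast); the variable names are my own.

```python
# Back-of-the-envelope sketch of the trend-scaling argument.
# All numbers come straight from the text; this is division, not a climate model.

observed_trend = 0.10      # °C/decade, JRA-55 through the 2015-16 El Niño
modeled_trend = 0.25       # °C/decade, average of the UN's 106 model realizations
un_century_forecast = 2.2  # °C, UN-model 21st-century warming, natural-gas-heavy scenario

ratio = observed_trend / modeled_trend          # observed warming as a share of modeled
scaled_forecast = un_century_forecast * ratio   # forecast scaled by that share

print(f"Observed/modeled trend ratio: {ratio:.0%}")        # about 40%
print(f"Scaled 21st-century warming: {scaled_forecast:.1f} °C")  # about 0.9 °C
```

The scaled figure of roughly 0.9°C matches the value given in the paragraph above.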

JRA-55 also has a rather obvious “pause” between the late 1990s and 2014, contrary to recent reports.

The fact of the matter is that what should be the most physically realistic measure of global average surface temperature is also our coolest.

Former New York governor Eliot Spitzer, who resigned in disgrace after a 2008 scandal, has written a short essay by way of memoir in the recent 50th anniversary issue of New York magazine. As one who’s written more than my share about Spitzer’s abuses of power as governor and attorney general, I wasn’t expecting to feel much sympathy, and mostly I didn’t. But then I got to the last paragraph:   

I’m a builder now. Most of what my dad built was on the Upper East Side, because that was the heart of the city. Now it’s Brooklyn. We have a site under construction on Kent Avenue in Williamsburg. As a lawyer, as a prosecutor, in politics, there’s a lot of talk. Occasionally things happen. When you’re building, you actually see concrete being poured and curtain wall being applied to the façade. It’s enormously satisfying. I hate to sound like Ayn Rand, but there’s something very rewarding about that tangible productivity.

My reaction was: hold that thought! And pursue it further, maybe even to the point where being a builder—or for that matter sounding like Ayn Rand—involves no trace of apology or embarrassment.

Fiscal rules can theoretically improve policy by eliminating “time inconsistency” among lawmakers. But the proposed fiscal trigger being discussed in the Senate tax reform bill would be a terrible fiscal rule.

You can see the thinking. Several senators worry about tax cuts blowing a big hole in the public finances. If they do not have the desired impact on economic growth, resulting in less revenue than expected, the budget deficit will grow and drive the national debt even higher. Concerned senators therefore seek a mechanism whereby if revenues are lower than expected, tax cuts will be partially reversed.

It’s welcome that some senators take the US federal government’s burgeoning national debt seriously. But there are obvious flaws with this plan (though we do not know details as yet), some of which have been discussed widely already.

First, how exactly will deviations in revenues be judged? An economy is a complex organism, and it is difficult to disentangle how much any change in revenue relative to forecasts is due to changes in tax rates as against other factors. Just look at the debate in the UK. Last week, the Daily Mail newspaper published a report that tax receipts following corporate tax cuts there had been much higher than expected. But experts from the Institute for Fiscal Studies pointed out that much of this was due to a faster recovery of corporate profits in the financial sector and the effects of Brexit, which had little to do with the changing rate. Under a trigger which merely judged revenues against forecasts, a host of things that affect revenues (both upwards and downwards) could be chalked up as the effects of tax policy, potentially resulting in damaging tax rises.

Then, as J.D. Foster notes, there are likely to be other tax policy changes and changes in growth forecasts in future years too. How will these be disentangled and the effects of this specific Act isolated? Will the trigger apply to just a particular revenue stream, such as corporate income tax revenues, or more broadly to capture all the spillovers of any investment boost? If the former, the probability that the trigger will be activated is highly dependent on the accuracy of any analysis of the incentives to incorporate versus operating as a passthrough. In other words, there are huge unknowns here.

Second, the inclusion of a trigger mechanism actually dampens the pro-growth effects of the tax plan, and risks lower-than-expected revenues becoming a self-fulfilling prophecy. Take corporate rates. On the margin, uncertainty about what the corporate rate might be in the long term deters investment today. Less investment today results in lower GDP and lower tax revenues elsewhere in the code. This lower-than-expected tax revenue then activates the trigger (if it applies across total revenues), which raises corporate taxes. There is good reason why economists say that tax policy should provide certainty and permanence in regard to rates. The GOP plan already has a lot of phase-outs resulting from the Senate reconciliation rules. The last thing it needs is the risk of more.

Third, you do not need to be a Keynesian to recognize that an unforeseen recession, which would dampen revenues relative to forecast, would be a terrible time to worsen supply-side incentives by increasing the corporate income tax or marginal income tax rates. In truth, Congress would likely override the trigger in such circumstances. But if the trigger would simply be abandoned when it binds, then it is not a very well-designed trigger! Of course, there could be a recession escape clause, but similar logic applies more broadly if the economy grows more slowly than expected for reasons other than tax reform.

In short, a fiscal trigger that threatened higher taxes would introduce considerable uncertainty, risk tax hikes at the worst possible time, and could risk tax hikes when other factors resulted in lower revenue growth. 

On November 22, after some reluctance, Secretary of State Rex Tillerson joined the United Nations and United Kingdom in calling the current Rohingya crisis an “ethnic cleansing.” Holding Myanmar’s military, security forces, and local vigilantes responsible for the crisis, Tillerson stated that the United States could pursue accountability via targeted sanctions. While some hailed Tillerson’s label of ethnic cleansing as a start, it’s worth taking a closer look at the politics behind it. First, ethnic cleansing does not elicit a legal response, whereas the labels of “crimes against humanity” or “genocide” do. Second, targeted sanctions are known to be ineffective, so threatening Myanmar with them seems unproductive.   

The current humanitarian crisis began on August 25, when the Arakan Rohingya Salvation Army coordinated an attack on Myanmar’s police and security forces. Myanmar’s military crackdown on the Rohingya population was severe, resulting in a mass exodus that is now called the fastest growing refugee emergency in the world. Bangladesh, one of the poorest countries in the world, is now host to at least one million Rohingya refugees, and relief agencies like the United Nations Children’s Fund are struggling to establish a health system to try to limit malnutrition and the spread of disease. Stories of burning villages, massacres, sexual violence and rape are emerging daily. So, then, why is there a global unwillingness to label the Rohingya persecution by the Myanmar government and military as genocide?

There are three main reasons why “ethnic cleansing” is preferred as a label over “genocide.”

First, labeling a crisis “ethnic cleansing” has no legal implications—and hence is easier for states to deal with. The Convention on the Prevention and Punishment of the Crime of Genocide has declared genocide to be a crime under international law, and defines it as “the intent to destroy an ethnic, national, racial or religious group.” Ethnic cleansing, on the other hand, refers to the expulsion of a group from a certain area, but there is no treaty that determines its parameters. Even though the lines between ethnic cleansing and genocide are blurry, the former requires no domestic or international legal action. The label of ethnic cleansing, therefore, seems like a call for action but in reality is less politically charged, and is more like a “feel good” option for the international community.

Second, labeling an atrocity as ethnic cleansing is less time consuming. Historically, applying the label “genocide” takes decades. For example, the systematic killing of the Armenian people by the Ottoman Empire in 1915–1917 was first recognized as genocide by the United States in 1975 (though Alabama and Mississippi still do not recognize it). While 28 countries recognize the Armenian genocide, Turkey continues to reject the label. Similarly, the UN recognized the organized targeting and killing of the Tutsis in Rwanda in 1994 as genocide in 2014—20 years after the atrocities. Just last week, Ratko Mladic, the “butcher of Bosnia,” was found guilty of genocide and crimes against humanity twenty years after committing the acts, and after a trial that took five years to conclude.

And third, ethnic cleansing opens the door for the United States to impose specific economic and military sanctions on Myanmar. The State Department is especially interested in pursuing sanctions against Myanmar’s government and military officials who are directly responsible for the atrocities. The problem is that sanctions are typically unsuccessful in changing state behavior, so even if the United States imposed specialized sanctions on Myanmar, it would do little to ease the plight of the Rohingyas. Targeted sanctions could also derail Myanmar’s already slow economic development, which is largely due to decades of oppressive military rule.

Myanmar’s government, military, and Buddhist hardliners not only continue to deny allegations of genocide but have become emboldened in their persecution of the Rohingyas. For example, Myanmar’s authorities constantly link Rohingyas, who are Muslim, to terrorism, taking advantage of the rhetoric of the Global War on Terror that has mostly targeted Muslims worldwide. Though there are some concerns about jihadist recruitment in the Rakhine province, there is little evidence to suggest the Rohingya as a group are a unique threat. Myanmar’s leader, the now disgraced Aung San Suu Kyi, even went as far as to blame “fake news” for distorting information regarding the latest military crackdown in the province, calling it a “huge iceberg of misinformation.” Amidst pressure from Bangladesh, Myanmar signed a repatriation program, but has agreed only to take in those refugees who can present identity documents, such as government-issued “white cards.” This of course is an impossible feat for the thousands who fled with almost nothing—and who will almost certainly be unable to produce any evidence of citizenship or residency.

The label of genocide, however, is important—and necessary in the Rohingya case. The most significant advantage that genocide has over any other label is that Myanmar’s authorities would come under the jurisdiction of the International Criminal Court (ICC), a special court that investigates and prosecutes war crimes, crimes against humanity, and genocide. Even though proving genocide takes time, there is a great deal of empirical evidence in the Rohingya case. For example, Article II of the Convention on the Prevention and Punishment of the Crime of Genocide outlines five acts that constitute a genocide:

  1. Killing members of the group,
  2. Causing serious bodily or mental harm,
  3. Deliberately inflicting conditions of life calculated to bring about the group’s physical destruction in whole or in part,
  4. Imposing measures intended to prevent births, and
  5. Forcibly transferring children.

There is ample evidence of the Rohingyas being subjected to each of these heinous acts, and human rights groups are rigorously documenting the physical destruction and psychological torment being inflicted on the Rohingyas by Myanmar’s military now.

Not all labels are created equal. This is precisely why it is imperative to use the correct one. In the case of the Rohingya crisis, genocide is perhaps the only label that could give Myanmar’s government and military pause in their persecution, for fear of being tried at the ICC. As for inviting outside military intervention, the label of genocide carries the same risk as the label of ethnic cleansing. What ethnic cleansing does not carry is legal force, and that is what is vital.

Honduras’ presidential election is mired in controversy after the country’s Electoral Tribunal (TSE) suspended the release of results on Sunday night, when President Juan Orlando Hernández was trailing left-wing candidate Salvador Nasralla by 5 percentage points with 58.5% of polling stations counted. There is no precedent in Honduras for such a blackout on the release of election results, and many observers are worried—with good reason—that electoral fraud might take place.

First, some context. Juan Orlando Hernández was barred from running for reelection. Honduras’ constitution is famous in Latin America for its repeated emphasis on presidential term limits. It says that any person who has held the office of the presidency cannot be president or vice president again. Moreover, it states that under no circumstance can the constitution be amended to allow for presidential reelection. In 2009, Manuel Zelaya was removed from power by a Supreme Court ruling for organizing an illegal referendum on a constitutional amendment to allow for his reelection.

Then, things changed. In December 2012, the National Assembly—whose speaker at that time was Juan Orlando Hernández—sacked four justices of the Constitutional Court for voting down several government pet projects. In April 2015, the Constitutional Court—with four new justices—struck down the prohibition on presidential reelection claiming it violated human rights. This allowed Hernández to contest this year’s election, even though the popular legitimacy of his reelection bid was always contested.

As president, Hernández built a reputation as a strongman. With the strategic help of the Liberal Party, Hernández implemented much of his economic and security agenda in Congress. Crime has gone down significantly under his watch and public finances have improved. He also became a Washington favorite for his perceived collaboration in fighting drug trafficking. But there are also widespread concerns about his increasingly authoritarian rule and the control he exerts over otherwise independent institutions such as the Supreme Court and the TSE. Tax authorities are fond of harassing businesses and independent professionals.

Polls indicated that Hernández was going to win reelection comfortably. However, on election night Nasralla—who leads a coalition that includes Zelaya supporters—came out ahead when the first results were reported. This did not stop Hernández from declaring himself the winner (Nasralla had done the same even before the TSE released the first results). Then came the blackout from the TSE. The head of the electoral body says that the results from rural polling stations cannot be reported until the votes are counted in the capital (although no such thing happened in previous elections). Rumors—spread mostly by members of the ruling National Party—claim that Hernández has overcome Nasralla’s lead and will win by a small margin.

The stage is set for a political crisis. Nasralla’s supporters are already in the streets denouncing a fraud in the making. Their strategy all along was to denounce an electoral fraud if their candidate was defeated, no matter the margin. But now, they have good reason to suspect one. If Hernández is declared the winner, his legitimacy will be very questionable. Honduras could enter a very dangerous period.

President Trump has nominated Alex Azar to be the next Secretary of Health and Human Services. Azar will appear tomorrow for questioning before (and sermonizing by) members of the Senate’s Health, Education, Labor, and Pensions Committee.

Here are 14 questions I would ask Azar at his confirmation hearings.

  1. Is Congress a small business as that term is defined in the Affordable Care Act?
  2. Colette Briggs is a four-year-old girl with aggressive leukemia who is about to lose coverage for the one hospital within a hundred miles that can deliver her chemotherapy. She’s losing that coverage because insurance companies are fleeing the Exchanges. What do you plan to do, what can HHS do, about this problem?
  3. What will you do to prevent drug manufacturers from using the regulatory process to corner the market on certain drugs so they can gouge consumers and taxpayers?
  4. HHS already publishes data on Exchange premiums and insurer choice. Will you commit to publishing a review of the growing body of research showing Exchange coverage is getting worse for many expensive illnesses?
  5. Does HHS have an obligation to encourage young, healthy Americans to pay the hidden taxes contained in the ACA’s rising health insurance premiums?
  6. How will HHS increase its efforts to educate Americans about all their options for avoiding the mandate penalty?
  7. Short-term health insurance plans are an affordable alternative to increasingly costly Exchange coverage. Will you reinstate the 12-month policy term that existed before this year, and allow short-term plans to be guaranteed-renewable?
  8. The previous administration issued rules making it generally unlawful to purchase or switch Exchange plans for nine months out of the year. The Trump administration has restricted this freedom even more, making it generally unlawful for ten and a half months out of the year. Should consumers be free to purchase and switch health plans when they choose, just like any other product?
  9. Will you require insurance companies to repay the “reinsurance” subsidies the Government Accountability Office found the Obama administration illegally diverted to them?
  10. Will you press the Food and Drug Administration to allow the sale of birth-control pills over the counter, without a prescription?
  11. Medicare, Medicaid, and ObamaCare attempt to pay insurance companies according to the cost of each individual enrollee. If those complicated formulas really work, should government just give the money to the enrollees and let them control their health insurance and health care decisions?
  12. Is Obamacare’s Independent Payment Advisory Board constitutional?
  13. Should seniors be able to opt out of Medicare without losing Social Security benefits?
  14. Will you end government encouragement of “abuse-deterrent” opioids, which have not reduced overdose deaths and are borderline unethical because some are literally formulated to hurt people?

The distribution effects of the Senate tax bill are examined in a Washington Post story. Reporter Heather Long looks at the bill’s Obamacare mandate repeal, based on a new Congressional Budget Office (CBO) study.

A table in the study appears to show that “the Senate Republican tax plan gives substantial tax cuts and benefits to Americans earning more than $100,000 a year, while the nation’s poorest would be worse off.”

But as Long notes,

The main reason the poor get hit so hard in the Senate GOP bill is because the poor would receive less government aid for health care….The CBO and JCT analyses make it seem as if a family is actually getting money taken away from them, but in reality, most of these families making under $30,000 don’t pay any income tax. The credits and subsidies they received to help them buy health insurance were typically sent directly from the government to the insurance company. So these families are unlikely to see any changes to their tax bills.

Along with the CBO table, Long presents a Joint Committee on Taxation (JCT) table sent to her by Senate GOP staff showing the tax bill’s effects without the Obamacare piece. This table indicates across-the-board tax cuts in the early years, but it presents only aggregate dollar amounts, with no context.

I add context in the table below. The first column shows current-law individual and corporate income taxes in 2019, based on my estimates discussed here. (The JCT does not post current-law income tax figures for 2019 online.) The second column shows the dollar cut amounts—without the Obamacare part—from the JCT/Washington Post table.

The third column shows the percentage cuts. Households at the bottom do not pay income taxes in aggregate, so they are “n/a.” Under the Senate tax plan in 2019, middle-income households would receive much larger percentage cuts than higher-income households.
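The percentage column is simple division: each group’s dollar cut divided by the taxes that group pays under current law, with “n/a” for groups that pay no net income tax. A minimal sketch of that calculation (the figures below are illustrative placeholders, not JCT or TPC estimates):

```python
def percent_cut(dollar_cut, current_law_tax):
    """Return the tax cut as a percentage of current-law taxes paid.

    Groups that pay zero or negative net income tax in aggregate get
    None ("n/a"), since a percentage cut is undefined for them.
    """
    if current_law_tax <= 0:
        return None
    return round(100 * dollar_cut / current_law_tax, 1)

# Illustrative figures only (billions of dollars), not actual estimates:
print(percent_cut(12, 150))  # middle-income group: larger percentage cut
print(percent_cut(30, 900))  # high-income group: larger dollar cut, smaller percentage
print(percent_cut(5, 0))     # bottom group pays no net income tax -> None
```

The same arithmetic explains why a large dollar cut at the top can still be a small cut in percentage terms.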

Given the (unfortunate) political importance of distribution tables, the JCT should post a fuller and more neutral set of such tables online.

Border Patrol agent Rogelio Martinez, 36, was recently laid to rest after dying in the line of duty. The cause of his death is a mystery and the government has released few details. A spokesman for the FBI said that Martinez was “not fired upon” but Governor Greg Abbott (R-TX) said Martinez was killed in “an attack.” A spokesperson for the National Border Patrol Council, a government union that represents Border Patrol agents, said that Martinez may have been bludgeoned to death by rocks. Another source claims that Martinez may have perished because of injuries he sustained in a fall down a culvert. More information will hopefully come forward in the coming days and weeks to clear up this mystery. Martinez’s untimely death is a tragedy regardless of the actual cause.

Many politicians, including President Trump, cited Martinez’s death as a reason for a border wall and more spending on security, but policy should rarely (if ever) be changed in response to single incidents like these. Instead, properly analyzed data on how many Border Patrol agents are murdered in the line of duty should be the starting point, so that we can at least see how deadly the occupation actually is. This information is unreported in news stories on Martinez’s death and I could not find it in an online search, so I estimated it from publicly available data. The government records all Border Patrol agent and Customs officer deaths in the line of duty. I went through the deaths since 2003 and excluded Customs officers. That left 33 Border Patrol agent deaths from the formation of Customs and Border Protection (CBP) in 2003 through November 19, 2017 (Table 1). More agents died in 2012, but 2004 had the highest rate of agent deaths, at 0.028 percent of all Border Patrol agents, or one out of every 3,606 agents on duty that year. From 2003 through 2017, the chance of a Border Patrol agent dying in the line of duty was about one in 7,968 per year.
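The “one in N per year” figures are cumulative agent-years divided by deaths. A minimal sketch of the calculation (the 262,944 agent-year total here is back-solved from the one-in-7,968 figure for illustration, not an official CBP number):

```python
def annual_odds(deaths, agent_years):
    """Return N such that the annual chance of dying is 'one in N'.

    agent_years is the sum over the period of agents employed each year,
    so deaths / agent_years is the per-agent, per-year death rate, and
    its reciprocal is the 'one in N' figure.
    """
    return agent_years / deaths

# 33 line-of-duty deaths, 2003 through Nov. 2017; the agent-year total
# is an assumed illustrative value, back-solved from the article's rate.
print(round(annual_odds(33, 262_944)))  # -> 7968, i.e. about one in 7,968
```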

Table 1

Border Patrol Agent Deaths Per Year

[Table not reproduced: for each year from 2003 through 2017, it lists Deaths, Number of Agents, Agents Per Death, and Percent Death.]
Source: Customs and Border Protection.

I determined the cause of death for each Border Patrol agent from the online blurbs on CBP’s website. About half of all agents who died on duty from 2003 through 2017 died in car accidents (Figure 1). About 15 percent died as a result of assault or murder, and 18 percent died of health-related accidents such as heart attacks or heat stroke. Most surprisingly, 12 percent drowned in accidents. I counted the death of Border Patrol agent Luis Aguilar as murder because a car driven by a suspected smuggler struck him. I counted the death of Border Patrol agent Nicholas D. Greenig as a car accident because he struck a large animal with his patrol car. Agent Javier Vega Jr. was murdered while off duty, but I counted his death as a murder in the line of duty because the CBP website records him as dying in the line of duty for this reason:

On September 20, 2016, it was determined that, in light of information identified during the intensive investigation completed by the Willacy County Sheriff’s Department, Agent Vega’s actions were indicative of his law enforcement training and that he instinctively reacted, placing himself in harm’s way to stop a criminal act and protect the lives of others. His death was later determined to have been in the line of duty.

Figure 1

Border Patrol Agent Cause of Death


Source: Customs and Border Protection.

On its surface, the death of agent Martinez seems to confirm the perception that Border Patrol agents have a dangerous job. But the danger of an occupation must be gauged relative to the danger of other occupations or populations. About one in 7,968 Border Patrol agents died per year from 2003 through 2017. That compares favorably with the rate for all law enforcement officers, who had a one in 3,924 chance of dying in the line of duty in 2011. Incomplete data preclude an apples-to-apples comparison over the full 2003–2017 period, but in 2011 the Border Patrol agent death rate was about one in 10,722. In 2011, law enforcement officers were almost three times as likely to be killed in the line of duty as Border Patrol agents were.

Car accidents account for about half of the deaths of Border Patrol agents during this period. Assuming that the number of traffic fatalities across the United States in 2016 and 2017 was the same as in 2015, an American had about a one in 8,344 chance per year of dying in a traffic accident from 2003 through 2017. Border Patrol agents had a one in 16,434 annual chance of dying in a car accident over the same period. In other words, Border Patrol agents were about half as likely to die in traffic accidents in the line of duty as Americans were in the course of their lives. A better version of this estimate would compare death rates per mile traveled, but that information is not available for Border Patrol agents.

Including Rogelio Martinez, five Border Patrol agents have been murdered in the line of duty since 2003, which means their annual chance of being murdered in the line of duty was one in 52,589. More than 238,000 Americans have been murdered since 2003, a nationwide rate of one in 19,431 per year. Regular Americans were almost three times as likely to be murdered in any given year from 2003 through 2017 as Border Patrol agents were.
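The “almost three times” comparison follows directly from the two one-in-N rates: a larger N means a lower annual risk, so the risk ratio is the agents’ N divided by the general population’s N. A quick check using the rates above:

```python
def risk_ratio(one_in_safer, one_in_riskier):
    """How many times likelier the riskier group is to be murdered per year.

    Each rate is expressed as 'one in N'; annual risk is 1/N, so the
    ratio of the two risks is N_safer / N_riskier.
    """
    return one_in_safer / one_in_riskier

# Agents: one in 52,589 per year; all Americans: one in 19,431 per year.
ratio = risk_ratio(52_589, 19_431)
print(round(ratio, 1))  # -> 2.7, i.e. "almost three times"
```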

Border Patrol agents volunteered for a job that routinely places them in danger but that heightened danger does not translate into a higher chance of being murdered or dying in a car accident, when compared to all Americans, or dying in the line of duty, when compared to other law enforcement officers. Border Patrol equipment, training, and support likely explain that. The death and possible murder of Border Patrol agent Rogelio Martinez is a tragedy but one that is thankfully rare.

Table 2

Border Patrol Agents, Cause of Death, and Year of Death, 2003-2017

Name | Year | Cause of Death
Rogelio Martinez | 2017 | Assault/Murder
Isaac Morales | 2017 | Assault/Murder
David Gomez | 2016 | Accident (health)
Manuel A. Alvarez | 2016 | Car Accident
Jose D. Barraza | 2016 | Car Accident
Tyler R. Robledo | 2014 | Car Accident
Javier Vega, Jr. | 2014 | Assault/Murder
Alexander I. Giannini | 2014 | Car Accident
David R. Delaney | 2012 | Accident (health)
Nicholas J. Ivie | 2012 | Assault/Murder
Jeffrey Ramirez | 2012 | Accident (health)
James R. Dominguez | 2012 | Car Accident
Leopoldo Cavazos Jr. | 2012 | Car Accident
Eduardo Rojas Jr. | 2011 | Car Accident
Hector R. Clark | 2011 | Car Accident
Brian A. Terry | 2010 | Assault/Murder
Michael V. Gallagher | 2010 | Car Accident
Mark F. Van Doren | 2010 | Car Accident
Robert W. Rosas Jr. | 2009 | Assault/Murder
Cruz C. McGuire | 2009 | Accident (health)
Nathaniel A. Afolayan | 2009 | Accident (health)
Jarod C. Dittman | 2008 | Car Accident
Luis Aguilar | 2008 | Assault/Murder
Eric Cabral | 2007 | Accident (health)
Richard Goldstein | 2007 | Accident (drowning)
David J. Tourscher | 2007 | Car Accident
Ramon Nevarez Jr. | 2007 | Car Accident
David N. Webb | 2006 | Car Accident
Nicholas D. Greenig | 2006 | Car Accident
George B. DeBates | 2004 | Car Accident
Travis W. Attaway | 2004 | Accident (drowning)
Jeremy M. Wilson | 2004 | Accident (drowning)
James P. Epling | 2003 | Accident (drowning)

Source: Customs and Border Protection.

Barely two months into the post, “Lexington,” the new America columnist at the estimable British weekly The Economist, has taken aim at America’s newest Supreme Court justice, Neil Gorsuch, over the speech Gorsuch gave 11 days ago before some 2,300 Federalist Society members and friends at the society’s 35th annual convention. Lexington’s title, “Conservative lawyers are among the president’s biggest enablers,” well captures its theme: “A movement dedicated to defending the constitution” will come to regret that it has enabled “a president who cares not a whit for legal philosophy.”

Yet the slim evidence Lexington adduces, outlined below, goes quite the other way. If “the reason many worry about the Federalist Society” is “because its influence is vast, brazen and part of a wider politicising of the last branch of American democracy to succumb to partisanship,” as Lexington avers, that worry is seriously misplaced. Indeed, the Federalist Society was created precisely to check the politicization of the courts that has gone on now for eight decades.

Ironically, toward the end of the piece, Lexington implicitly concedes that long-running politicization by citing Yale’s Bruce Ackerman as worrying that “the federal courts are heading for a period of ‘hyper-politicisation’ not seen since the 1930s.” That’s when Franklin Roosevelt threatened to pack the Supreme Court with six new members after it found several of his New Deal schemes to be unconstitutional. But at the same time, Lexington characterizes the founding of the Federalist Society in 1982 as “a riposte to the legal profession’s liberal mainstream”—as if that “mainstream” were not itself deeply politicized.

It was. Liberal New Deal justices had eviscerated constitutional limits on congressional power, reduced property rights and economic liberty to second-class status, and opened the floodgates to the modern administrative state where most law today is written. Then in the 1950s, liberal judges began finding rights nowhere to be found even among our unenumerated rights. That created a conservative backlash, but conservative justices addressed only the last of those problems, and unevenly at that. Their main concern was to oppose what they saw as liberal judicial “activism” with conservative judicial “restraint” and “deference” to the political branches. Inordinately concerned with judicial behavior—as opposed to the text itself—both sides were too often guilty of reading constitutional and statutory text through their respective political prisms.

It remained for later conservatives and especially libertarians to shift the focus from judicial behavior to legal texts in their full reaches and as originally understood—and nowhere has that evolution been more cultivated and apparent than through the many public programs of the Federalist Society. Yet Lexington takes exception to Justice Gorsuch’s “triumphalist tone”—as if the restoration of originalism and textualism were not to be celebrated—while criticizing the Federalist Society’s “project” of staffing the courts with originalist judges. “It represents an assault on the courts’ already-tested consensual traditions,” Lexington writes, for “the federal courts look stronger for including a range of legal philosophies.”

Think about that. The courts look stronger if they include judges who read the text as it was meant to be read and judges who read it as it was not meant to be read. Thus the courts’ “already-tested consensual traditions.”

Lexington’s one substantive point, drawn from Justice Gorsuch’s speech, concerns the justice’s “boldness” in signaling a legal agenda he means to pursue. He would curtail “the federal bureaucracy’s power to interpret statutes,” Lexington writes, unlike Antonin Scalia, the originalist he succeeded, who “deferred to the executive.” And that augurs “a more activist approach.”

But isn’t that precisely what Lexington wants—an “engaged” justice to check a run-amok president (and legislature), a justice who reads the law as written, not through a political lens? It’s not true that Justice Scalia “deferred to the executive” in any wholesale way. It is true, however, that our courts, in the main, have long been entirely too deferential to the statutory interpretations of administrative agencies.

Before he died, Justice Scalia had begun rethinking some of his earlier administrative law positions. If Justice Gorsuch follows that path, along with others President Trump may put on the bench from among those the Federalist Society has brought to his notice, Lexington need have no worry about America’s politicized courts. They will check this president and others who follow if they try to rule “by pen and phone.” And they will be on their way to recovery as the non-political branch the Framers intended them to be. 

At Saturday’s Halifax International Security Forum, Eric Schmidt announced that Google will alter its search algorithm to “de-rank” results from Russia Today.

Why did Google do this? Perhaps they were concerned about Russian meddling in American elections, or perhaps they thought their customers wished to see less of Russia Today. It matters not. Generally, Google has broad power to police its platform. We might not like the decision, but it is not ours to make.

There is a second possibility: government officials may have threatened Google to bring about this “de-ranking” of Russia Today. If so, the First Amendment poses questions for us. We need to answer such questions, however, only if government officials did in fact threaten Google.

Consider the following exchange between Sen. Feinstein and Google General Counsel Kent Walker from the Senate Intelligence Committee hearings on Russian influence in the 2016 election:

Feinstein: Why didn’t Google take any action regarding RT after the intelligence community assessment came out in January of 2017?

Walker: … with regard to RT, we recognize the concerns that have been expressed about RT and concerns about its slanted coverage, this is of course a question that goes beyond the internet, RT is covered, its channel is on major cable television stations, on satellite television stations, its advertising appears in newspapers, magazines, airports, it’s run in hotels in pretty much every city in the United States. We have carefully reviewed the content of RT to see that it complies with the policies that we have against hate speech, incitement to violence, etc., so far we have not found violations, but we continue to look, beyond that, we think that the key to this area is transparency, that Americans should have access to information from a wide variety of perspectives, but they should know what they’re getting, so we already on Google provide information about the government funded nature of RT, we’re looking on ways to expand that to YouTube and potentially other platforms.

Feinstein: Well, I’m really not satisfied with that, that’s sort of been the trend of the testimony all along, I think we’re in a different day now, we’re at the beginning of what could be cyberwar, and you all, as a policy matter, have to really take a look at that and what role you play.

Sen. Feinstein rather boldly asked Google to take action to hamper RT’s ability to communicate its views to American audiences. This demand came after an opening speech in which she proclaimed that internet platforms must “do something about it, or we will.” It is difficult to interpret this as anything but a threat. If Google, and others, failed to quash RT using tools constitutionally prohibited to Congress, Congress would yoke them with onerous regulation. Google has now responded with the de-ranking. Of course, we lack a document where Mr. Schmidt says something like “to avoid unspecified harms by Congress, Google must change our search function.” But the context and timing of the hearings and the de-ranking make a persuasive case for coercion.

Congress is wrong in three ways. First, freedom of speech. RT America, despite being recently required to register itself as a foreign agent, enjoys First Amendment speech rights in the same way that other foreign, state-funded television channels like Al Jazeera and BBC America do. Congress does not have the power to prevent them from running advertisements, or publishing news stories on their website.

Second, editorial control. Courts have ruled that the First Amendment protects a search engine’s ordering of results. A search algorithm may be “meta” editorial control, but it nonetheless is and should be protected from government threats. By analogy, consider The Wall Street Journal. Should we allow public officials to “persuade” editors to move a story from the front page to page 11?

Third, the interests of listeners/readers. Courts have permitted the government to license broadcasters and to impose fairness obligations as a condition for a license. Such impositions are said to be in the interest of listeners or viewers given the scarcity of the broadcast spectrum. But opportunities to speak on the Internet are not scarce or limited by nature. In any case, Google users do not need public officials to decide what they read about and when. That idea runs directly counter to freedom of speech.   

In responding favorably to this bullying cry for censorship, Google appears to have politicized its platform in order to forestall political punishment. This is strikingly similar to the EU’s use of threatened regulation to force social media firms to adopt its definitions of hate speech, applying them globally through terms-of-service changes. Both episodes represent end-runs around the First Amendment, in which private firms are pushed to enforce policies that governments would be otherwise prohibited from instantiating. The recent Google example, however, is far more troubling, as it is the American government, supposedly fully bound by the First Amendment, which has resorted to bald threats in order to circumvent the rights-protections guaranteed to its citizens.

HT: Will Duffield for research assistance and Matthew Feeney for the Wall Street Journal example.

The rising level of deaths from opioid overdoses is getting a lot of attention, including from a Nobel laureate economist and the White House. In the rush to find a solution to the problem of opioids, I hope we don’t forget the problem that opioids were intended to cure: chronic severe pain. Living with that kind of pain is awful, and it’s wonderful that science has found ways to help people in pain.

But that’s not the way President Trump’s surgeon general sees it. In an NPR interview this week, Dr. Jerome Adams had this to say:

NPR’s Elise Hu: Much of this crisis started in doctors’ offices. We’ve heard statistics like doctors in the United States prescribe four times the number of pills per person that doctors in the United Kingdom do, for example. What do you think is encouraging doctors to prescribe at those levels?

Dr. Adams: Well, I can tell you, as one of those doctors, that many of my colleagues tell me they feel pressured to prescribe. You have patients who expect an opioid is the only or main way to treat their pain. But I would take issue with one thing you said—I don’t think it started in the doctors’ offices. I think it starts before that. I think that it starts with this expectation that everyone’s going to have no pain, with the idea that a pill can solve everything. And we need to help folks understand there’s a real danger to feeling like we can medicate our way out of any and all problems. 

(Note: that statement appears at about 4:25 in the audio, but not in the related transcript.)

Of course no one should feel that we can “medicate our way out of any and all problems.” But we can relieve some pain. And I am disappointed to hear the surgeon general say that we should get over our attitude that doctors can help to alleviate our pain.

In a 2005 Cato study, Ronald T. Libby argued that opioid therapies for pain had proved successful, but because of criticism and law enforcement efforts “many physicians and pain specialists have shied away from opioid treatment, causing millions of Americans to suffer from chronic pain even as therapies were available to treat it.”

In a recent article, surgeon and Cato senior fellow Jeffrey Singer argues that crackdowns on opioid prescription and the resulting decline in prescriptions are driving more patients to the black market, while “opioid abuse and overdose rates have declined by 25 percent in states where marijuana has been made legally available.”

There are going to be plenty of arguments about the best policy to deal with opioid abuse. But let’s start with the premise that the alleviation of pain is a great thing.

Imagine a friend approaches you with an opportunity for what he believes will be easy money: a guy he met knows where some local drug dealers store their merchandise—a great big pile of it, fifty kilos, lightly guarded. Your friend’s guy thinks it could be grabbed relatively easily and flipped for a hefty profit. The whole thing sounds sketchy to you, but cash is tight this month and stealing from drug dealers does not feel like the most morally objectionable of crimes. Perhaps not the most sophisticated sort (and having watched a bit too much TV), you soon find yourself in a van on your way to the score.

Except there was no score—it never existed—and your friend’s “guy” is actually a police officer, whose colleagues arrive and arrest you and charge you with conspiracy to traffic in a controlled substance (the mythical fifty kilos) while carrying a firearm (your friend brought one along). Never mind that the drugs you are being punished for trafficking are make-believe—as is the place from which you were to steal them—you now face fifteen years in prison for indulging a yarn spun by the government.

These are, with some simplification, the facts of United States v. Conley, handed down last week by the United States Court of Appeals for the Seventh Circuit. The Seventh Circuit felt itself bound to uphold the conviction, but not without first referring to the practice as “tawdry” and “question[ing] the wisdom and purpose of expending the level of law enforcement resources and judicial time and effort in this prosecution.” The Court of Appeals quoted the trial judge’s opinion in the same case, which declared Conley’s fifteen-year sentence for an imaginary crime “devoid of true fairness … serv[ing] no real purpose other than to destroy any vestiges of respect in our legal system and law enforcement that this defendant and his community may have had.” The trial judge, however, was required to impose it because of mandatory minimum sentences set by Congress.

The Seventh Circuit is not the first to encounter this practice of prosecuting hypothetical criminals for crimes of the government’s concoction, nor the first to express its displeasure. The opinion itself cites eight other opinions from around the country that take a dim view of this gimmick. One judge on the Sixth Circuit declared “the concept of these ‘stash house sting’ operations [is] at odds with the pride we take in presenting American criminal justice as a system that treats defendants fairly and equally under the law.” Another, on the Third Circuit, argued “the potential for abuse and mischief [here] is endemic.” Yet, in case after case, courts give the thumbs up.

Perhaps the most disturbing aspect of this ploy is that it empowers the government to define the crime it is inventing. Since drug sentences are tied to the weight of the drugs at issue, officers can inflate the sentence by inflating the imaginary bag of drugs. Because they made up a stash house with fifty kilos, Conley was charged with fifty kilos; had they said two kilos, or three hundred, or one million, the sentence would have been different—criminal justice as magical realism.

The average reader may well wonder why this could be considered anything other than entrapment. After all, if the government told you to commit a crime it should not have the gall to demand you be punished for it—a sort of inverse of the traditional definition of chutzpah, where the man who killed his parents asks the court to take mercy on him as an orphan. But the entrapment defense is very narrow, creating results that would be farcical if they were not so tragic. Take Conley: because the government agent didn’t go to Conley directly but to his eventual associates who in turn recruited Conley, the Seventh Circuit held Conley couldn’t claim entrapment. The government may therefore concoct a conspiracy and induce one party to carry it out, who then recruits third parties to help him. After all are arrested, the primary party, who has a potential claim of entrapment, is given a reduced sentence for testifying (as happened in this case), and they throw the book at whoever else was ensnared.

And to what end? Advocates of more aggressive criminal law enforcement warn we are experiencing a new crime wave; the claim seems dubious, but even if it is true, the mind boggles as to how it improves matters to let the government make up nonexistent crimes to punish. Surely those resources could be better focused toward those pursuing violent ends on their own initiative? Given the bulging seams of our current prison capacity, what good does it do to shackle unsuspecting rubes with decades-long sentences of the government’s manufacture? Yet the mischief will continue until courts stop simply gritting their teeth and start showing some judicial grit.

The current tax reform debate has focused on economic growth and the value of cuts to different groups of taxpayers. Tax simplification has received less attention, and Republican bills would only make modest gains in that regard.

Yet a major tax code simplification would not only save time on administration, it would increase financial privacy and deter cyberattacks on the Internal Revenue Service. A new study by Michael Hatfield of the University of Washington looks at the risks posed by the IRS’s vast data collection on 290 million Americans. The more micromanagement there is in the tax code, the more information the IRS collects on our finances, lifestyles, and activities.

Here are some of Hatfield’s points:

  • Many federal agencies have been hit with damaging cyberattacks in recent years, including the Office of Personnel Management, the Justice Department, the Pentagon, and the White House.
  • The IRS has also suffered attacks. In 2015, “the IRS launched the Get Transcript service, enabling taxpayers to view this information (known as a ‘transcript’) online. Unfortunately, the security of the service was so low that, within the first few months of the service, hackers stole personal information from about 724,000 taxpayer accounts.”
  • Individual hackers, groups, and hostile governments regularly target U.S. companies and government agencies. The Chinese government may be building a database of personal information on all Americans. The IRS is an ideal target for hackers seeking to steal personal information and money, and for people simply wanting to wreak havoc by destroying data.
  • The IRS has a history of expensive technology failures. The agency has struggled to update its systems with new technologies and security protections. The IRS has a hard time competing to attract top computer experts, and so it may fall further behind.
  • The IRS collects information on our income sources, family structure, health information, housing data, small business details, educational situation, retirement finances, and many other things. During investigations and audits, the agency can demand and collect just about any information it wants to judge the accuracy of tax return data.
  • Political pressures work against security and personal privacy. Politicians encourage the IRS to improve “customer service,” which encourages more online interaction. And politicians push to close the “tax gap” of taxes owed but not collected, which prompts the agency to demand ever more data from individuals and businesses.

Hatfield notes that ensuring good cybersecurity is a difficult task for any agency these days, so damaging attacks against the IRS seem likely. Ironically, massive IRS attacks may have been avoided thus far due to the antiquated nature of the IRS’s computer systems.

We need better IRS management, but Hatfield’s main point is that Congress should simplify the tax code to reduce the amount of information collected from Americans. Another problem he points to is the overwithholding of income taxes, which necessitates more than 110 million refunds each year. Hatfield says, “making the IRS less like an ATM would reduce its appeal to financial thieves.” One fix, in my view, would be to repeal the earned income tax credit, which pays $60 billion in subsidies to more than 25 million people a year. The huge EITC error rate illustrates the IRS’s inability to monitor its vast transactions.

Hatfield concludes:

Congress has designed a tax system that requires the IRS to collect information on hundreds of millions of individuals and to routinely issue hundreds of billions in refunds. If the tax law did not require so much information on so many, nor involve refunds to so many, the IRS would be a less appealing and more defensible cyberattack target. In short, if the tax law were simpler in specific ways, the information technology needs at the IRS would be simpler, and adequate cybersecurity for it would be easier.

The tax law need not demand so much information on so many individuals, nor must its administration turn on a system that generates refunds as a rule rather than an exception. Within the limits of the political and financial realities that determine legislation, there is ample flexibility for Congress to reform tax law so that it demands less of both taxpayers and tax administrators and, thereby, provides more information security.

A broader conclusion from Hatfield’s study is that policymakers should put much more emphasis on personal privacy when considering all federal programs. The government has a poor record of guarding its databases, so policymakers should be very skeptical of federal activities that require the gathering of personal data on Americans.

Republicans are letting themselves get cornered by slanted tables appearing to show the rich gaining the most from tax reform. The official distribution tables are not presenting GOP tax changes in a fair manner, as I discuss here.

But Fox News also? On Sunday, Fox displayed this image, and host Eric Shawn asked Rep. Jim Renacci, “…if you make more than a million, if you’re a richie, you save $21,000 … Do you think that’s fair and appropriate?” Renacci did not do a good job of pushing back.

Presenting dollar cuts with no context is meaningless. The proper context is how much people currently pay in the taxes being cut. (The Fox figures are way off, by the way.)

The table below shows Tax Policy Center estimates of average cuts for the Senate bill in 2019, and my estimates of the average amount paid in individual income, corporate income, and estate taxes under current law. The “richies” might get the largest dollar cuts, but they will pay an enormous pile of taxes in 2019 with or without the GOP reforms.

The $30-$40K group won’t pay any federal income taxes under current law, on average. So the “cut” that group receives would stem from more people not paying anything and from an increase in refundable credits, which are spending subsidies.

The middle groups would receive far larger percentage tax cuts than the top group.

Figures in the table are based on TPC data in the T17-0268 detail table and in T17-0043, along with my own estimates.
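The distinction between dollar cuts and percentage cuts can be made concrete with a quick back-of-the-envelope calculation. The numbers below are hypothetical round figures chosen purely for illustration (they are not the TPC estimates, though the $21,000 cut echoes the figure Fox displayed):

```python
# Hypothetical round numbers for illustration only -- not TPC estimates.
# A dollar cut means little without knowing how much each group pays now.
groups = {
    # group: (current tax paid, dollar tax cut)
    "middle-income household": (5_000, 1_000),
    "high-income household": (500_000, 21_000),
}

for name, (tax_paid, cut) in groups.items():
    pct = 100 * cut / tax_paid
    print(f"{name}: ${cut:,} cut on ${tax_paid:,} paid = {pct:.1f}% cut")
```

On these assumed figures, the high earner's $21,000 cut is the larger dollar amount but only a 4.2 percent reduction in taxes paid, while the middle-income household's $1,000 cut is a 20 percent reduction. That is the context the Fox graphic omitted.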

Republicans cannot tell Fox News what to report, of course, but they can require that their own Joint Committee on Taxation show more detail in its tables and present tax changes in a fair context, which it is not currently doing. In turn, that may encourage news outlets to present the GOP plan in a more accurate and neutral light.

“A man’s home is his castle”—this is not just an aphorism, but a longstanding legal principle. From Biblical times through to the English common law, the home was recognized as a place of refuge in which the owner is protected against uninvited private parties and unjustified government intrusion. That legal shield against arbitrary invasions of the home was embodied in the Fourth Amendment, which resulted in large measure from Americans’ reaction to the British authorities’ use of general warrants to search colonists’ homes without individualized suspicion. As a result of this history, “when it comes to the Fourth Amendment, the home is first among equals.” Florida v. Jardines, 569 U.S. 1, 6 (2013). 

In Collins v. Virginia, the issue before the Supreme Court is whether a police officer, uninvited and without a warrant, may enter private property, approach a home, and search a vehicle parked just a few feet from the house. Cato has filed a brief arguing that permitting such a practice would be squarely inconsistent with the Fourth Amendment’s special solicitude for the privacy of the home. At common law, and under the Fourth Amendment, the protection accorded to the home extends to its surrounding grounds and out-buildings—the so-called “curtilage.” Because this area is closely tied to the home, both physically and psychologically, the curtilage is regarded as part of the home itself for Fourth Amendment purposes. This protection is the foremost example of the Fourth Amendment’s general defense against unreasonable government intrusion into private property. Indeed, the text of the Amendment protects “[t]he right of the people to be secure,” not just in their “persons,” but also in their “houses, papers, and effects.”

The general justification for allowing warrantless searches of vehicles is the reduced expectation of privacy in vehicles as they travel on public roads. But there is no such reduced expectation when a vehicle is parked at home. To the contrary, expectations of privacy are at their zenith at the home and its surroundings. And to the extent that the warrantless search of automobiles is justified by their mobility, a vehicle parked at home is immobile; if it leaves the home and curtilage it would become subject to search. Moreover, the existing doctrine on “exigent circumstances” assures that the warrant requirement will not undermine critical law enforcement needs. For example, the need to prevent physical harm or the imminent destruction of evidence would allow officers to intrude on the curtilage without a warrant.  But in the absence of those circumstances, the Fourth Amendment’s warrant requirement protects Americans’ most private refuge against the abusive use of government power.

As the fifth round of talks on the renegotiation of the North American Free Trade Agreement (NAFTA) wraps up, the United States, Canada, and Mexico continue to work out the technical details of their various proposals. Since it is the week of Thanksgiving, and because I can't stop thinking about all the great food I'm going to eat on Thursday, it's a perfect time to reflect on the benefits of NAFTA for Americans who buy a variety of agricultural products.

When you look at products imported for consumption (that is, products that receive no additional processing), imports from Canada and Mexico have grown quite a bit since NAFTA took effect in 1994. Imports matter because they give consumers greater choice (like fruits and vegetables in winter!) and lower prices, so that everyone gets to enjoy some Thanksgiving cheer.

The top consumer-oriented imports from Mexico in 2016 were fresh vegetables ($5.6 billion), other fresh fruit ($4.9 billion), and wine and beer ($3.1 billion). From Canada, the top imports were snack foods ($4 billion), other consumer-oriented products ($2.6 billion), red meats ($2.2 billion), and processed fruits and vegetables ($1.4 billion). In addition, the U.S. imported $25.8 million in live turkeys in 2016, with 99.9% coming from Canada (the rest came from France).

The United States exports a lot to Canada and Mexico as well; they rank second and third for U.S. agricultural exports (China is the top export market). U.S. agricultural exports to its NAFTA partners grew from $8.7 billion in 1992 to $38.1 billion in 2016, while imports grew from $6.5 billion to $44.5 billion. It is no wonder, then, that U.S. farmers and others in the agricultural industry strongly support NAFTA and don't want to see it scrapped.

A 2015 USDA report marking the 20th anniversary of NAFTA concluded:

The 20th anniversary of NAFTA provides testimony to the lasting value of agricultural trade liberalization to the North American economy. By removing thousands of tariffs, quotas, import licensing requirements, and other policy measures that formerly distorted agricultural trade and FDI among the United States, Canada, and Mexico, NAFTA facilitated a large increase in cross-border economic activity in the agricultural and processed food sectors.

So this Thanksgiving, I'm thankful for NAFTA, and I hope the United States, Canada, and Mexico do what they can to prevent any rollback of the liberalization we have achieved, and work to modernize the agreement so that we continue to benefit from this great deal.

On Friday, the Treasury Department released a report on Financial Stability Oversight Council (FSOC) designations. This report could have addressed the problem underlying FSOC’s designation authority: the fact that it makes explicit which financial institutions are “too big to fail,” paving the way for more bailouts of the kind we saw in 2008. Sadly, the report flew wide of the mark, focusing on the minutiae of the designation process and all but ignoring the glaring bailout problem.

A little background. FSOC is an entity created in 2010 by Dodd-Frank, the sweeping financial reform legislation passed in the wake of the 2008 crisis. It comprises the heads of various financial regulatory agencies and is chaired by the Secretary of the Treasury. One of its purposes is to facilitate communication among regulators, helping to give them a complete picture of the financial sector beyond their own territories.

FSOC’s other purpose — and arguably its primary one — is to identify systemic risk and designate certain entities as “systemically important financial institutions” (SIFIs). These SIFIs are then subject to heightened oversight by the Federal Reserve. The idea is that increased oversight will reduce the chances of these companies running into trouble, and thereby obviate the need for bailouts.

But the government has not shown itself to be adept at identifying systemic risk. Not in 2008. Not in any of the last eight financial crises, in fact. Even if the coordination among regulators facilitated by FSOC improves the government’s ability to see trouble brewing, it will never have perfect foresight.

When one of these SIFIs stumbles — as one eventually will given a long enough timeline — how will the government avoid a bailout? In the past, large firms understood with a wink and a nod that Uncle Sam was backstopping their bets. But there was at least some ambiguity. And in the case of Lehman Brothers, the government did not ultimately provide a safe landing. Could the government have let Lehman fail if it had already branded it “systemically important”? I tend to think not.

Back to the new report. The report does a good job of identifying the problems that FSOC and SIFI designation present:

Designation by the Council…should not imply that the government will rescue the designated firm in the event of failure. A market expectation of such a rescue could cause inefficient investment decisions and increased risk-taking. Nor should government discipline be substituted for the market discipline of investors, counterparties, and clients as a result of the designation process.

I couldn’t have said it better myself. Later, the report states that “a company subject to a Council designation should not receive a financial advantage from any perception that the government may rescue the company in the event of its failure. Such a perception could give the company an unfair competitive advantage in funding markets.” Again, an excellent point.

Unfortunately, while the report accurately identifies and describes some of the chief problems with the SIFI designation, it completely fails to identify any corresponding solutions.

As to market advantage, the report does note offhandedly that the market may adjust once the regulatory effects become clearer. But this explanation makes little sense. Yes, the market will incorporate the advantage by pricing in the government backstop a company is likely to enjoy as a SIFI. That doesn't mean the effect goes away; instead, the company's position in the market will reflect the costs and benefits of being designated as systemically important.

As for the SIFI designation potentially forcing the government's hand if a bailout is needed, the report simply outlines the problem and then never mentions it again. The report does recommend using an activities-based approach to regulation, although it is not clear whether this is intended to address the specific problem of creating moral hazard by designating certain firms as SIFIs. In the past, while FSOC has been troublingly opaque about its designation process, it does seem that firm size has been the most salient concern.

The report notes that focusing on activities may reduce the need to designate firms, principally because firms will likely avoid activities that might trigger designation. And yet, the report provides little detail about how FSOC will evaluate activities to determine their systemic risk, or even what "risk" in this context means. Given the government's poor track record in predicting crises, it would be helpful if the report had explained how FSOC's analysis will be different going forward.

Finally, the report suggests using designations sparingly. But as long as any firm is designated as a SIFI, the specter of government bailouts remains.

It is worth noting that the report is written as a memorandum to President Trump, in response to a request he sent to the Treasury Department earlier in the year. In that request, the president asks Treasury principally about the process for designating SIFIs, a process that leaves much to be desired.

But the request also asks Treasury to assess whether FSOC and its SIFI designations align with the objectives laid out in the president’s February executive order establishing principles for financial regulation. Among those objectives is to “prevent taxpayer funded bailouts.” Addressing whether SIFI designation itself might lead to more bailouts was squarely within the four corners of the president’s request. And Treasury completely ignored it.

The report makes some worthwhile suggestions about the designation process. For example, it recommends that FSOC notify companies earlier in the process, so that they can take measures to address FSOC’s concerns and avoid SIFI designation entirely. Of course, to assume that addressing these issues is a good thing is to assume that FSOC has correctly identified problems in the first place. And that may be a big assumption.

The report also recommends that FSOC "should only designate a company if the expected benefits to financial stability from Federal Reserve supervision and enhanced prudential standards outweigh the costs that designation would impose." This is a good recommendation, but I'm fairly disheartened that it's one that must be made. Should regulators need to be told that they should only act to make things better, not worse?

Then there’s the report’s recommendation that, before making a SIFI designation, FSOC should consider whether a company is actually likely to face financial trouble: “Material financial distress at a nonbank financial company does not pose a threat to U.S. financial stability if the company will not experience material financial distress.” Indeed: things that do not happen rarely cause distress. Again, I’m disappointed this must be explicitly stated.

It might be less distressing if the report had simply failed to mention the risk that SIFI designation could give rise to more bailouts. That would leave open the possibility that alerting Treasury staff to this danger would spur some action. Instead, the report’s authors, aware of the risk and explicitly stating the need to address it, offer no explanation of how that might be accomplished.

The bottom line is that there is probably no way for the government to designate firms as “systemically important” without simultaneously creating the guarantee of bailouts later on, should the need arise. As has been argued in several other places, the SIFI designation process not only fails in Dodd-Frank’s stated mission of ending “too big to fail,” but explicitly enshrines it in law.

The FSOC itself need not be disbanded, but if we’re serious about eliminating taxpayer-funded bailouts — and I hope we are — its power to name SIFIs should end.

[Cross-posted from]