
The truth about climate change

The agenda, then, is politically expedient? I've been here a long time, and Joe and FSUreed are the smartest people on this subject by far. Why won't you listen to them? What motive do they have? Are they running for office? Or are they telling you, quite simply, that as a species we are ruining the planet and must change? That's the explain-like-I'm-five part. Why do you refuse to listen to people who have spent their lives studying it? If a guy comes over and fixes your computer, would you argue with him about processors? No, because you have no idea how it works, but you have a lot of ideas about how it should work. I don't understand that mentality and probably never will. You specifically seem like part of the "I got mine, screw everyone else" crowd. Reexamine your religion and morality. I'm an atheist and volunteer for animals in need. I don't get paid for it. I do it because I have the time and they need the help. According to you, they should just die.
In my case, it isn't that I'm not listening to them; it's that they're talking about one thing and I'm trying to talk about another. This has happened before. It is very frustrating to me, and I'm sure it is frustrating to them, as well, if they truly don't realize we're talking about separate issues.

At the risk of further alienating any number of people, I have to add one observation. I cannot help but be struck by the similarities between this conversation and those involving conspiracy theories, specifically the 9/11 attacks. Just as I do not have the knowledge to understand the nuances of climate science, neither am I equipped to understand complex physics. So when a conspiracy theorist or AGW disciple does a document dump with graphs, tables, etc., I have absolutely no way of judging the credibility or accuracy of the data.

This is why I limit -- or try to limit -- my participation in these discussions to aspects where I have some knowledge or experience. In the case of AGW, that is the logic and consistency of the tactics and statements of the proponents.

One example: the effect of AGW on hurricanes. It's been a while, but my understanding is that the acknowledged expert in the field, in charge of that research for the IPCC, was Chris Landsea. He found no relationship. When his research was presented to the public as 180 degrees from what he found, he wrote a "dear colleague" letter complaining bitterly about the misrepresentation. When I raise this point, the AGW proponents come back with an argument about the effect (or lack of same) of AGW on hurricanes. But that is not my point. My point is the way the information was manipulated.
 
Does "model response error" mean "incorrect assumptions used in designing the model"?

I manage a team of 16 people, including data scientists, that performs analytics and builds predictive models using tools like SAS. My background is in Aerospace Engineering, not Climate Science, but I understand the scientific method, can follow the general climate principles, and have an appreciation for the many different disciplines in play here (statistics, climatology, geology, etc.).

Given that background, but without seeing the actual models, my sense is that:

1. The models have tons of assumptions in them, and quite frankly I've yet to see a sensitivity analysis (Monte Carlo, et al.) done to rank-order the assumptions from most sensitive (likely the Feedback Multiplier, as discussed above, but also aerosols and so forth) to least sensitive, so we can focus our attention on those at the top (see the sketch after this list). A specific answer to your question can be found at the link below, which says "The discrepancy between simulated and observed trends...between 1998 and 2012...can be explained in part by...a tendency for the models to simulate a stronger response to CO2 than is consistent with observations" - that is the direct response to CO2 increases - and it goes on to address the Feedback Multiplier caused by a water vapor assumption as well: "Another possible source of model error is the representation of water vapor in the upper atmosphere". This is exactly what I'm talking about in this thread.

https://books.google.com/books?id=jn4mCAAAQBAJ&pg=PA62&lpg=PA62&dq="model+response+error"&source=bl&ots=-UyLeir9Bc&sig=y1gez3OO7MLosKrIVDktRA7EDE0&hl=en&sa=X&ei=_7yKVZ6EJoXzsAXa_JqABQ&ved=0CEkQ6AEwBw#v=onepage&q="model response error"&f=false

2. The scientific method requires you to make a prediction, and then measure to see if it came true in order to corroborate your theory. The climate models are the prediction, but they have failed to predict the last 15 years with reasonable accuracy, and therefore something is wrong with the theory/hypothesis - and the most likely culprit is the Feedbacks, imo. Again - most definitely not settled science.

3. For all the assumed complexity of the climate models, the results of a few of the key models used by the IPCC can be replicated using very simple equations, indicating they are HEAVILY weighted towards CO2 and do not include much else that matters (like natural factors). I've found this type of "black box" analysis very interesting:

http://wattsupwiththat.com/2011/01/17/zero-point-three-times-the-forcing/
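For what it's worth, here is a minimal sketch in Python (not SAS) of both ideas from the list above: a one-box "black box" emulator of a temperature trace driven by forcing, plus a crude Monte Carlo rank-ordering of which assumptions the output is most sensitive to. Every forcing series, parameter range, and constant below is an illustrative placeholder, not a value from any actual climate model:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2015)

# Toy forcing series (W/m^2): a smooth CO2 ramp plus a fixed aerosol offset.
co2_forcing = 0.02 * (years - 1900)
aerosol_forcing = -0.5 * np.ones(len(years))

def one_box_emulator(lam, tau, aerosol_scale):
    """Single-lag 'black box' emulator: dT = (lam * F - T) / tau per year."""
    forcing = co2_forcing + aerosol_scale * aerosol_forcing
    temps = np.zeros(len(forcing))
    for t in range(1, len(forcing)):
        temps[t] = temps[t - 1] + (lam * forcing[t] - temps[t - 1]) / tau
    return temps

# Monte Carlo over the assumptions: sample each one, record the final anomaly.
n = 5000
lam = rng.uniform(0.2, 1.2, n)     # feedback multiplier / sensitivity
tau = rng.uniform(2.0, 20.0, n)    # lag time constant (years)
aer = rng.uniform(0.5, 1.5, n)     # aerosol forcing scale

outputs = np.array([one_box_emulator(l, t, a)[-1]
                    for l, t, a in zip(lam, tau, aer)])

def rank_corr(x, y):
    """Spearman-style rank correlation via double argsort."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Rank-order the assumptions by how strongly each one drives the output.
for name, samples in [("feedback multiplier", lam),
                      ("lag time constant", tau),
                      ("aerosol scale", aer)]:
    print(f"{name:20s} rank corr with output: {rank_corr(samples, outputs):+.2f}")
```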
 

  • I literally explained this particular reference above already. In detail.
  • I also linked actual sea ice data and noted that we do not yet have any information on the 2015 or 2016 Sept/Oct summer nadir. The level is currently on track to match or beat the 2012 record low. The Top Ten lowest summer coverages are ALL in the 21st century.
  • Also note that the actual reference above puts the timeframe at 2016±3 years. There ARE error bars, and most quotes have referred to 'as soon as'; all are WITHIN that error bar range, and we will not be able to determine if this is correct or not until late summer 2019, 4-1/2 more years.
  • I also pointed out that 'no ice cover' is specific to the summer minimum, NOT 'no more ice ever' in the Arctic.
  • I also pointed out that 'no ice' means approximately 10-15% sea ice coverage, which is on the order of 1 million km^2 to 1.3 million km^2, whereas the seasonal average minimum from the 1980s to 2010 is around 6.3 million km^2.

Did you go to the NSIDC link to review any of the data for yourself, or are you just continuing the Gish Gallop of blogger information and individual soundbites?

How many times do I need to repeat myself: go to the information sources, the papers and publications that the information is coming from, NOT the snippets and soundbites from blogs and media outlets. They (unmoderated blogs and media) generally do a HORRIBLE job of conveying the information accurately, often going for exaggerated, sensationalized headlines over accurate depictions.
 
I manage a team of 16 people, including data scientists, that performs analytics and builds predictive models using tools like SAS. My background is in Aerospace Engineering, not Climate Science, but I understand the scientific method, can follow the general climate principles, and have an appreciation for the many different disciplines in play here (statistics, climatology, geology, etc.).

Given that background, but without seeing the actual models, my sense is that:

1. The models have tons of assumptions in them, and quite frankly I've yet to see a sensitivity analysis (Monte Carlo, et al.) done to rank-order the assumptions from most sensitive (likely the Feedback Multiplier, as discussed above, but also aerosols and so forth) to least sensitive, so we can focus our attention on those at the top. A specific answer to your question can be found at the link below, which says "The discrepancy between simulated and observed trends...between 1998 and 2012...can be explained in part by...a tendency for the models to simulate a stronger response to CO2 than is consistent with observations" - that is the direct response to CO2 increases - and it goes on to address the Feedback Multiplier caused by a water vapor assumption as well: "Another possible source of model error is the representation of water vapor in the upper atmosphere". This is exactly what I'm talking about in this thread.

https://books.google.com/books?id=jn4mCAAAQBAJ&pg=PA62&lpg=PA62&dq="model+response+error"&source=bl&ots=-UyLeir9Bc&sig=y1gez3OO7MLosKrIVDktRA7EDE0&hl=en&sa=X&ei=_7yKVZ6EJoXzsAXa_JqABQ&ved=0CEkQ6AEwBw#v=onepage&q="model response error"&f=false

2. The scientific method requires you to make a prediction, and then measure to see if it came true in order to corroborate your theory. The climate models are the prediction, but they have failed to predict the last 15 years with reasonable accuracy, and therefore something is wrong with the theory/hypothesis - and the most likely culprit is the Feedbacks, imo. Again - most definitely not settled science.

3. For all the assumed complexity of the climate models, the results of a few of the key models used by the IPCC can be replicated using very simple equations, indicating they are HEAVILY weighted towards CO2 and do not include much else that matters (like natural factors). I've found this type of "black box" analysis very interesting:

http://wattsupwiththat.com/2011/01/17/zero-point-three-times-the-forcing/

FYI - they TEST their models by running them retrospectively against KNOWN inputs from 1900 on to compare to the actual temperature record.

That is how they are 'validated', and various sensitivities/feedbacks are adjusted. When a model doesn't track well with the past 115 years of actual data, it is adjusted.

Things like major volcanic events provide the opportunity to test/challenge many of those assumptions.
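A hedged sketch of what that hindcast-and-adjust loop might look like, reusing the toy one_box_emulator and years from the sketch earlier in the thread. The "observations" are synthetic and the whole thing is illustrative; real validation runs full physics simulations, not curve fits:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "observed" record: truth generated with lam=0.8, tau=8, plus noise.
observed = one_box_emulator(0.8, 8.0, 1.0) + rng.normal(0, 0.05, len(years))

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

# Hindcast tuning: grid-search the sensitivity against the historical record.
candidates = np.linspace(0.2, 1.2, 51)
errors = [rmse(one_box_emulator(lam, 8.0, 1.0), observed) for lam in candidates]
best = candidates[int(np.argmin(errors))]
print(f"best-fit sensitivity over the hindcast: {best:.2f} "
      f"(RMSE {min(errors):.3f})")
```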

If scientists like Judy Curry and Roy Spencer disagree with those models, they are fully capable of producing their own, and validating them against the historical record. NONE of the 2-3% crowd of climatologists have been able to do that.

You do not advance science by nitpicking someone else's results; you do it by building on theirs, making your own model, and showing that it works as well or better. If they are SO concerned about the existing models, WHERE ARE THEIR COUNTERPARTS? Show me the data, and I'll happily agree with those results. As of now, they do not exist.
 
You can't say it's a good model just because the hindcast is accurate. You can make any model fit historical data - that doesn't make it a good model. I think that is where people are fooling themselves. The proof is in how the model does with predictions in the future.

Also, you most definitely can argue that the models are faulty without producing one of your own. That's the whole point - NOBODY has yet created a highly accurate model. But people are still using models to make policy decisions. If this were a business, and we were making expensive business decisions (like acquisitions) with models performing this poorly, people would get fired.

Again - just matching historical data by adjusting assumptions such as aerosols does not make it a good predictive model. That's just goofy.
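To put that point in code under toy assumptions: the sketch below fits models of different flexibility to a synthetic warming record through 1999, then scores them on the held-out 2000-2014 years. The flexible model wins the hindcast and loses the forecast. All data here are synthetic, invented only for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
yrs = np.arange(1900, 2015)
x = (yrs - 1900) / 100.0                        # normalized for stable fits
temps = 0.7 * x + rng.normal(0, 0.1, len(yrs))  # synthetic warming record

train = yrs < 2000
for degree in (1, 12):
    # Tune the model to the historical window only...
    coeffs = np.polyfit(x[train], temps[train], degree)
    fit = np.polyval(coeffs, x)
    # ...then compare in-sample (hindcast) vs out-of-sample (forecast) error.
    hind = np.sqrt(np.mean((fit[train] - temps[train]) ** 2))
    fore = np.sqrt(np.mean((fit[~train] - temps[~train]) ** 2))
    print(f"degree {degree:2d}: hindcast RMSE {hind:.3f}, forecast RMSE {fore:.3f}")
```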
 
Again - just matching historical data by adjusting assumptions such as aerosols does not make it a good predictive model. That's just goofy.

FYI - they are not simply 'adjusting assumptions' in the historic model runs; they are updating their ASSUMED/PROJECTED values for those 'random' forcings with actual observations, which is ALSO done and updated for FORWARD model predictions.

No one can be certain, when you run a model forward, what the 'actual' natural (or human) inputs will be (unless you are running vs. historical data, and the models DO perform well in those circumstances), so they add in some random variations to those as best guesses for the ranges, then run ALL of the model permutations of those inputs to get an 'average' and standard deviation of the traces.

This would include things like:
  • Actual aerosol concentrations from emissions vs. best guesses
  • Pacific Decadal Oscillation values (El Nino/La Nina) vs. best guess, or a random generator is used, as these decadal cycles are not predictable years ahead
  • Actual atmospheric CO2 levels vs. projections or estimates
  • Actual solar TSI (sun output) vs. projections
One of the more recent papers (link below) has updated model runs and input the ACTUAL PDO (El Nino/La Nina) variations, which are treated as random when the models are run. However, we have had a 'run' of more La Nina events in the past 5-10 years than is typical; when they re-run the models WITH those actual events, the models track quite closely to observations - they no longer 'overestimate' by much at all.
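A minimal sketch of that claim with made-up numbers: an ensemble of runs driven by random ENSO-like inputs is compared against "observations" generated from a cool-biased (La Nina-heavy) sequence; re-driving the same model with the actual sequence closes most of the gap. Nothing here comes from a real model:

```python
import numpy as np

rng = np.random.default_rng(7)
n_years, n_runs = 15, 200
trend = 0.02 * np.arange(n_years)     # assumed forced warming component

def run(enso):
    # Toy model: forced trend plus an ENSO-like interannual wiggle.
    return trend + 0.1 * enso

# Ensemble: the same model driven by many random ENSO realizations.
ensemble = np.array([run(rng.normal(0.0, 1.0, n_years)) for _ in range(n_runs)])
ens_mean = ensemble.mean(axis=0)

# Suppose reality drew a cool-biased (La Nina-heavy) sequence, like the 2000s.
actual_enso = rng.normal(-0.8, 1.0, n_years)
observed = run(actual_enso) + rng.normal(0.0, 0.02, n_years)  # small model error

err_vs_mean = np.sqrt(np.mean((observed - ens_mean) ** 2))
err_actual = np.sqrt(np.mean((observed - run(actual_enso)) ** 2))
print(f"RMSE vs ensemble mean:        {err_vs_mean:.3f}")
print(f"RMSE with actual ENSO inputs: {err_actual:.3f}")
```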

And if you are so concerned about 'the models' being inaccurate, it goes both ways:
Arctic sea ice models do not predict an 'ice free' Arctic until 2050 or even 2100. Observations appear to indicate this COULD happen by 2020, but more likely 2030. That is STILL 50 years AHEAD of what the models are saying.

Same with sea level rise model projections; observations are currently running at the VERY HIGH EDGE of the model ranges predicted.

In other words, 2 of the 3 major modeling efforts are GROSSLY UNDERESTIMATING the impacts vs actual observations.

And the third model type (for surface temperatures) tracks well when observations for TSI, PDO, etc. are input, instead of forward 'best guesses' for these essentially 'random' variations.

Here are some links you can review regarding updated surface temperature models and PDO/decadal variability, and how those factors impact short-term trends but not long-term trends or accuracy:

http://www.theguardian.com/environm...te-models-accurately-predicted-global-warming
From this paper:
http://www.nature.com/nclimate/journal/v4/n9/full/nclimate2310.html

http://touch.latimes.com/#section/-1/article/p2p-82655823/
From this paper:
http://www.nature.com/nature/journal/v517/n7536/full/nature14117.html

FWIW, Nature is one of the premier science journals, with some of the most rigorous peer review and scientific integrity around...
 
My computer just gave me a prediction. It said I will poop twice today and it is a result of global warming.

 
I understand that they are constantly "tuning" the models (as they should, and good to know they are starting to incorporate natural factors which should have been in there from the beginning), but every time they do it they are updating their hypothesis and the clock restarts on testing a prediction to confirm whether their new hypothesis (model) is accurate enough to drive policy decisions.

A lot of the data used to determine the coefficients for their models is more recent (because data older than 100 years or so becomes more questionable), so natural factors with longer cycle times will create problems for modelers. Initially they assumed that most of the increase in the late 20th century was due to human-caused CO2, and that drove the models to predict higher temps than actually occurred so far this century. As they reallocate some of that warming to natural cycles, this should drive the predictions closer to observed values (and drive the feedback multiplier lower).
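The reallocation in that last sentence is simple arithmetic; here is a toy version. The 0.6 C of warming and 1.5 W/m^2 of CO2 forcing are assumed placeholders, not measured values:

```python
# Sketch of the reallocation arithmetic: if a fixed amount of observed
# warming is split between a natural cycle and CO2, assigning more of it
# to the natural component lowers the fitted CO2 sensitivity.
observed_warming = 0.6   # deg C over the period (assumed)
co2_forcing = 1.5        # W/m^2 over the same period (assumed)

for natural_share in (0.0, 0.2, 0.4):
    co2_part = observed_warming * (1.0 - natural_share)
    sensitivity = co2_part / co2_forcing
    print(f"natural share {natural_share:.0%}: "
          f"fitted sensitivity {sensitivity:.2f} C per W/m^2")
```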
 
Climate change was invented by Al Gore and his ilk

You follow the previous post with this? It reminds me of Adam Carolla's bit about someone who gets their ass kicked in an argument but finishes with "yeah, but still . . .". Joe and Dan's debate demonstrates this issue is (frankly) too complicated for public consumption.
 
I understand that they are constantly "tuning" the models (as they should, and good to know they are starting to incorporate natural factors which should have been in there from the beginning), but every time they do it they are updating their hypothesis and the clock restarts on testing a prediction to confirm whether their new hypothesis (model) is accurate enough to drive policy decisions.

But understand that there is a DIFFERENCE between adjusting an aerosol 'forcing' constant or factor vs. adjusting the AMOUNT that is observationally seen vs. what was surmised MIGHT be there in 2 or 5 years, looking forward.

You can have an absolutely PERFECT model for the climate, but if the solar TSI changes, or the aerosol concentration changes, or CO2 emissions diverge from your expected curve, or the PDO reverts to a long El Nino or La Nina phase, you WILL NOT get an output which matches observations. That is not because 'the model is incorrect'; it is because the forward-looking estimates/guesses you had to use as annually (or monthly) adjusted inputs were off (while the model is running, it uses lookup tables for these inputs, not fixed values). So, when the observed and measured values for those 'input' curves are different from your 'best guess', you update your model the next year with more accurate input curves.

No one is able to predict what all of those inputs will look like, so they use variations on them, ALONG WITH tweaks to forcing/feedback mechanisms (like you guys do with your modeling methods), to generate the large scatterplots of runs you see in things like IPCC reports and papers. And they simply make the 'best guess' that an actual scenario for those uncontrolled variables is close enough to one or more of their model runs to 'encapsulate' reality. That is why you see much of the scatter - it is NOT just due to model 'inaccuracy'; it is to see whether any and all of the scenarios of TSI, PDO, aerosols, and CO2 levels fall within the 'best guess' window.

It is not like these 'variable inputs' are just random numbers, either. They have acceptable and expected ranges they will likely follow.

Now, when you are CHANGING the sensitivities of forcings/feedbacks, THAT is a different issue, and that DOES directly relate to the 'accuracy' and predictability of the models. But it is important not to lump it all into one 'the models are inaccurate' bucket, because they are completely different sources of error - and one of them is not error at all; it is random/natural variability.
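A minimal sketch of the lookup-table idea described above, with invented TSI numbers: the model interpolates time-varying inputs from a table as it steps forward, and when a measurement arrives, the table entry is replaced and the run repeats from the start year:

```python
import numpy as np

start_year = 1900
table_years = np.array([1900, 1950, 2000, 2010, 2020])
tsi_table = np.array([1360.5, 1360.8, 1361.0, 1361.2, 1361.4])  # projections

def input_curve(table):
    """Interpolate the input for every model year from a lookup table,
    not a fixed value, as described above."""
    yrs = np.arange(start_year, 2021)
    return yrs, np.interp(yrs, table_years, table)

# A measurement arrives: observed 2010 TSI differed from the projection.
updated_table = tsi_table.copy()
updated_table[3] = 1361.05       # replace the guess with the observation

# Re-run from the original start year using the corrected input curve.
yrs, tsi = input_curve(updated_table)
print("2010 input, projected vs observed:",
      np.interp(2010, table_years, tsi_table),
      np.interp(2010, table_years, updated_table))
```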
 
You follow the previous post with this? It reminds me of Adam Carolla's bit about someone who gets their ass kicked in an argument but finishes with "yeah, but still . . .". Joe and Dan's debate demonstrates this issue is (frankly) too complicated for public consumption.
I can accept that. What I have trouble accepting is that Dan is automatically wrong because 97% of scientists agree with Joe and the science is settled.
 
But understand that there is a DIFFERENCE between adjusting an aerosol 'forcing' constant or factor vs. adjusting the AMOUNT that is observationally seen vs. what was surmised MIGHT be there in 2 or 5 years, looking forward.

You can have an absolutely PERFECT model for the climate, but if the solar TSI changes, or the aerosol concentration changes, or CO2 emissions diverge from your expected curve, or the PDO reverts to a long El Nino or La Nina phase, you WILL NOT get an output which matches observations. That is not because 'the model is incorrect'; it is because the forward-looking estimates/guesses you had to use as annually (or monthly) adjusted inputs were off (while the model is running, it uses lookup tables for these inputs, not fixed values). So, when the observed and measured values for those 'input' curves are different from your 'best guess', you update your model the next year with more accurate input curves.

No one is able to predict what all of those inputs will look like, so they use variations on them, ALONG WITH tweaks to forcing/feedback mechanisms (like you guys do with your modeling methods), to generate the large scatterplots of runs you see in things like IPCC reports and papers. And they simply make the 'best guess' that an actual scenario for those uncontrolled variables is close enough to one or more of their model runs to 'encapsulate' reality. That is why you see much of the scatter - it is NOT just due to model 'inaccuracy'; it is to see whether any and all of the scenarios of TSI, PDO, aerosols, and CO2 levels fall within the 'best guess' window.

It is not like these 'variable inputs' are just random numbers, either. They have acceptable and expected ranges they will likely follow.

Now, when you are CHANGING the sensitivities of forcings/feedbacks, THAT is a different issue, and that DOES directly relate to the 'accuracy' and predictability of the models. But it is important not to lump it all into one 'the models are inaccurate' bucket, because they are completely different sources of error - and one of them is not error at all; it is random/natural variability.

A lot of that makes sense, Joe, and I think we would agree on much of it. They may be running a lot of models just to try different variants, and then adjusting as they go based on observations as they come in, but from a scientific-method standpoint, if you want to test a hypothesis (i.e., this is how it works), you have to pick one set of assumptions and ride them out - and if the predictions are not true (i.e., not accurate enough to be useful), then you can adjust the hypothesis and try again.

My sense is we agree on WHAT they are doing, but I'm not sure the RESULT of what they are doing is good enough to say we can predict the future accurately enough for policy making. Probably depends on how accurate one feels they need to be, and how strongly one applies Pascal's Wager to the decisions.
 
A lot of that makes sense, Joe, and I think we would agree on much of it. They may be running a lot of models just to try different variants, and then adjusting as they go based on observations as they come in, but from a scientific-method standpoint, if you want to test a hypothesis (i.e., this is how it works), you have to pick one set of assumptions and ride them out - and if the predictions are not true (i.e., not accurate enough to be useful), then you can adjust the hypothesis and try again.

Correct, but your 'hypothesis' is based only upon the model construction, components and forcing/feedback 'multipliers' or 'sensitivities' you are checking.

Re-running or tweaking the model to correct for a volcanic event, or a TSI shift, or any other naturally occurring factor is not explicitly part of your hypothesis; it is your set of 'running' initial conditions. You don't necessarily 'reset' the model each year it goes awry by adjusting your hypothesis input factors (though if it's performing badly you do), but you DO update that 'initial conditions' lookup table each year and allow the same model to re-run given the new/updated initial conditions for the latest years. Then you restart the whole thing from its original start year (e.g., 1900) and use the updated lookup tables to correct for the inputs you guessed wrong on (or, for inputs that have significant error estimates, you run median/high/low case permutations).

This is why the level of confidence in the outputs has grown over the last decade or so: they can identify which 'forcings' produce the model runs that go off track significantly, and start to rule out those ranges for feedbacks. Again, a lot of the variation you see in model runs is not necessarily due to changes in the hypotheses; it is to encompass the widest range of possible initial-condition permutations the model needs to cover for the 'future years', even though they are making educated guesses at what those input combinations should be.
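And a minimal sketch of the median/high/low permutation runs mentioned above, using a toy stand-in model and invented input ranges to bracket the outcome:

```python
import numpy as np
from itertools import product

def toy_model(aerosol_scale, tsi_anomaly):
    """Toy stand-in for a full model run from the original start year."""
    yrs = np.arange(1900, 2021)
    return 0.007 * (yrs - 1900) - 0.3 * aerosol_scale + 0.2 * tsi_anomaly

# Low / median / high cases for two uncertain inputs (values invented).
aerosol_cases = (0.8, 1.0, 1.2)
tsi_cases = (-0.1, 0.0, 0.1)

# Run every permutation and report the spread of final-year outcomes.
finals = [toy_model(a, t)[-1] for a, t in product(aerosol_cases, tsi_cases)]
print(f"2020 anomaly across {len(finals)} permutations: "
      f"{min(finals):.2f} to {max(finals):.2f} C")
```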

Frankly, that they can get these complex models to 'stay sane' over a timeframe of 50, 100, or more years and remain close to observations is really quite impressive.
 