WattsUp said:
We may be confusing ourselves... there may not be any need to literally keep any running "tallies" of the positive or negative consumption. As Michael pointed out, actually doing so would be "complex".
Alternatively, the trip meter could be implemented by simply subtracting the current SOC from some "starting" SOC in order to display energy consumption relative to that point (which could be any point in the SOC range). Indeed, the starting SOC is made equal to the current SOC whenever the trip meter is reset.
The total consumption from that point on is simply the difference between the two levels:
consumption = starting SOC - current SOC
Note that this value can be computed at any time -- no tallying required, only "sampling" of the SOC, as I suggested in my previous post. It also automatically accounts for energy gained from regeneration.
...
This all makes a lot of sense to me, and seems to be corroborated by the meter's actual behavior.
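Just to be concrete about the approach being described, here is a rough sketch of a purely SoC-sampled trip meter. This is my own illustration only; the class name, reset behavior, and the 19.5 kWh usable-capacity figure are assumptions, not anything from Ford:

```python
# Illustrative sketch of a purely SoC-sampled trip meter (not Ford's firmware).
USABLE_CAPACITY_KWH = 19.5          # assumed usable pack capacity, new pack, ideal temps

class SocTripMeter:
    def __init__(self, current_soc):
        # Resetting the meter just remembers the SoC (0.0-1.0) at that moment.
        self.starting_soc = current_soc

    def consumption_kwh(self, current_soc):
        # Energy "used" = drop in SoC scaled by a fixed capacity constant.
        # Regen raises current_soc, so it is credited automatically.
        return (self.starting_soc - current_soc) * USABLE_CAPACITY_KWH

meter = SocTripMeter(current_soc=0.90)    # reset the trip with the pack at 90%
print(meter.consumption_kwh(0.40))        # down to 40% -> ~9.75 kWh used
```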
I think the trip meter's energy numbers are measured, not calculated from state of charge the way you propose, for the following reasons:
1. I and others have observed that displayed SoC does not correlate linearly with displayed energy usage. With a full battery, SoC initially drops quickly relative to energy use, showing 10% used when only 1.5 kWh have been consumed, then 25% at 4.5 kWh, 50% at 9.5 kWh, and so on. (With a roughly 19.5 kWh usable pack, a linear mapping would put 10% at about 2 kWh, not 1.5.) If energy use were calculated from SoC "sampling" rather than measured, wouldn't it correlate directly with SoC?
2. Sampling of SoC seems to be inherently inaccurate. I've never known a rechargeable battery's SoC display to behave consistently over its full range. Laptops, tablets, PS3 remotes, mobile phones, and my car: at times SoC seems to stay full longer than seems possible, and at other times it drops like a rock with minimal usage. Yes, the display on my FFE behaves far better than those other devices, but it does vary, as I said above. However, my energy usage never runs way lower than would make sense, nor does it ever jump up when I am clearly driving efficiently. It behaves as one would expect if it were being directly measured.
3. The battery's effective usable capacity varies with ambient temperature (nearly 20 kWh at ideal temperatures above 60°F, roughly 17 kWh at freezing), but the max and min SoC set points do not vary with ambient temperature. Low temperatures basically make the battery act like a gas tank with a small leak. So for the meter to translate SoC into energy used, it would need a correction factor that accurately reflects the effect of temperature on charge/discharge dynamics. There goes the lower-complexity argument. And if it had no correction factor, a cold-temperature full-battery discharge would record the ideal capacity regardless of temperature, but it doesn't do this: driving full to empty at 30-45°F a few months ago, I used 18 kWh according to the meter.
4. As the battery's capacity inevitably decreases, the relative SoC scale will not. The battery will still have "full" 100% and "empty" 0% set points after, say, 30k miles, when the difference between full and empty is perhaps 18 kWh rather than 19.5 kWh. In that situation, if the trip meter were calculated only from relative SoC (without a complex correction factor for battery age), it would show more energy used than was truly used. Just as in cold weather, reported capacity would stay at the ideal value even as available capacity slips (see the sketch after this list). The Wh/mi efficiency display would look worse than it really is, and the range estimate, with the two errors canceling, would decrease in proportion to true capacity loss. Unfortunately the range estimate is so variable that trends are unreliable, but several folks with over 20k miles do appear to have some capacity loss, however modest.
5. Measuring two-way energy movement isn't all that complex. We all have an electric service meter doing half of this at home, and those with home solar/wind tied into the grid have a meter doing exactly this. Accounting for inefficiency in moving energy from charger to battery doesn't have to be complex either. My layman's understanding is that at the battery level the charging inefficiency is fairly constant, at about 90% efficiency; the source-to-charger inefficiency seems more variable, but isn't relevant here. Insofar as battery-level inefficiency does vary, I don't think it would tax a computer much to run an algorithm that takes the relevant factors into account (a rough sketch of this kind of two-way tally follows below).
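To put rough numbers on points 3 and 4: a meter that only samples SoC has to scale the SoC drop by some fixed capacity constant, so a full 100%-to-0% discharge would always report that same constant no matter what the pack actually delivered. Here is a sketch using the approximate figures quoted above (19.5 kWh ideal, ~17 kWh cold, ~18 kWh aged), just to show the size of the discrepancy:

```python
# Illustrative arithmetic only, using the approximate figures from points 3 and 4.
IDEAL_CAPACITY_KWH = 19.5                 # assumed new-pack usable capacity at ideal temps

def soc_based_report(soc_drop_fraction):
    # A purely SoC-based meter can only scale the SoC drop by a fixed constant.
    return soc_drop_fraction * IDEAL_CAPACITY_KWH

for label, actual_kwh in [("cold pack (around freezing)", 17.0),
                          ("aged pack (after ~30k miles)", 18.0)]:
    reported = soc_based_report(1.0)      # full 100% -> 0% discharge
    print(f"{label}: reports {reported} kWh used,"
          f" actually delivered ~{actual_kwh} kWh,"
          f" overstated by {reported - actual_kwh:.1f} kWh")
```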
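And on point 5, the kind of two-way tally I have in mind only takes a few lines. This is a layman's sketch with made-up sample values and the 90% storage-efficiency figure applied to inbound energy, not a claim about how Ford actually implements it:

```python
# Layman's sketch of a two-way (signed) energy tally, sampled at the battery terminals.
STORAGE_EFFICIENCY = 0.90                 # assumed fraction of inbound energy that sticks

class EnergyTally:
    def __init__(self):
        self.net_kwh = 0.0                # running net energy drawn from the pack

    def sample(self, volts, amps, dt_seconds):
        kwh = volts * amps * dt_seconds / 3.6e6   # watt-seconds -> kWh
        if kwh >= 0:
            self.net_kwh += kwh                         # discharging: count all of it
        else:
            self.net_kwh += kwh * STORAGE_EFFICIENCY    # regen/charge: credit what's stored

tally = EnergyTally()
tally.sample(350.0, 40.0, 60.0)     # one minute at ~14 kW draw
tally.sample(350.0, -30.0, 30.0)    # thirty seconds of ~10.5 kW regen
print(round(tally.net_kwh, 3))      # net kWh used so far (~0.155)
```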
Now, if, as I think Michael suspects, the trip meter doesn't account for storage inefficiency, then per the EPA's 90% efficiency figure, the trip meter would think it's gaining 10 kWh for every 9 kWh it truly gains. Is that enough of an error to really mess things up? Consider that my best full-battery-discharge regen mile count was 10, translating to roughly 2.5 kWh as what the car "thought" it gained; at 90% efficiency the car would actually have gained about 2.25 kWh. If the total trip recorded a use of 19.8 kWh, I actually would have used 19.55 kWh. That's not a very big error to me, especially considering the trip meter isn't specifically designed to measure available battery capacity and we're just trying to make it do that.
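For what it's worth, the size of that error is easy to check with the same numbers from above (this only reproduces the post's figures):

```python
# Checking the size of the regen-accounting error using the figures quoted above.
regen_credited_kwh = 2.5                        # what the car "thought" it gained back
regen_actual_kwh = regen_credited_kwh * 0.90    # ~2.25 kWh at the assumed 90% efficiency
error_kwh = regen_credited_kwh - regen_actual_kwh
trip_reported_kwh = 19.8
print(error_kwh)                                # 0.25 kWh
print(100 * error_kwh / trip_reported_kwh)      # ~1.3% of the recorded trip total
```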