There has been a lot of discussion about lithium-ion battery life across the internet. However, within these discussions I have found contradictory details about how power is regulated between battery needs and device needs when a mobile/Android device is fully charged. It's clear that the charging process itself is "smart" (multiple stages with different currents, overcharge protection, etc.), but what happens when charging stops and the device is still in use? My understanding is that in these situations most laptops have regulators that divvy AC power between directly powering the device and "topping off" the battery as it loses charge to ambient factors. Do modern Android devices (and iPhones, for that matter) do the same thing?
Consider the following hypothetical situation:
- Device is plugged in to AC mains with a 1 A supply
- Battery is charged to 100%
- Current ongoing device use draws < 1 A
- Battery naturally loses 5% charge in 3 hours when not in use (I have no idea what an actual real-world value would be here, but the point is that there is a slow loss even when no power is drawn from the battery)
In this case, is the battery bypassed completely as long as the active needs of the device do not exceed 1 A, or does the battery remain an active power source regardless of charge state and cable connection status? If the battery is in fact bypassed, does the device wait for some charge-level threshold (say 95% charge) before touching the battery, to minimize accumulating (micro)charge cycles?
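For concreteness, here is a minimal back-of-envelope sketch (Python) of what such topping-off cycling would look like under the scenario above. The 95% recharge threshold and the 5%-per-3-hours figure are both assumptions from my hypothetical, not real measurements:

```python
# Rough estimate of topping-off micro-cycle frequency for the scenario above.
# All numbers are hypothetical (taken from the bullet list, not measured).

self_discharge_pct_per_hour = 5 / 3   # assumed: 5% lost over 3 hours
recharge_threshold_pct = 95           # assumed: device recharges at 95%
cycle_depth_pct = 100 - recharge_threshold_pct

hours_between_topoffs = cycle_depth_pct / self_discharge_pct_per_hour
topoffs_per_day = 24 / hours_between_topoffs

# Naive depth-weighted accounting: N cycles of depth d% ~ N * d / 100 full cycles.
equivalent_full_cycles_per_day = topoffs_per_day * cycle_depth_pct / 100

print(f"Hours between top-offs:     {hours_between_topoffs:.1f}")          # 3.0
print(f"Top-offs per day:           {topoffs_per_day:.1f}")                # 8.0
print(f"Equivalent full cycles/day: {equivalent_full_cycles_per_day:.2f}") # 0.40
```

Under this naive depth-weighted counting, even my deliberately exaggerated self-discharge rate only adds a fraction of a full cycle per day; whether that counting is actually valid near full charge is part of what I'm asking.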
It's possible that this is more of an electrical engineering question, and though it has certainly come up in that context (such as here), the related discussion seems either too broad or too specific to laptops.
Update
I've been learning a bit more about the correct terminology (thanks to @beeshyams). I see that the concept of "parasitic load" plays a very important, and detrimental, role in charging efficiency. However, I am led to believe that it is not necessarily a key variable in the fully-charged state described above. Anyway, without getting too deep into semantics, it seems like the key variable is instead "self-discharge" (the natural 5% loss in my example).
My original thought was that this self-discharge effect would be so small, and would lead to such infrequent topping-off (micro)charge cycles, that it would have a negligible effect on the cumulative charge cycle count of the battery. Therefore, by isolating the active load of the device to AC mains (keeping it plugged in most of the time) instead of drawing from the battery, one could potentially prolong battery life. Even though everyone says "don't do that", they never really say why, and that's really what led to this question.
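To put a number on "extremely infrequent", here is the same arithmetic with a more realistic self-discharge figure; the ~3%-per-month rate is my assumption of a typical idle Li-ion value, not a measured one:

```python
# Same top-off arithmetic as the first sketch, but with a more realistic
# self-discharge rate (assumed ~3% per month for an idle Li-ion cell).

self_discharge_pct_per_day = 3 / 30   # assumed: ~3% per month
cycle_depth_pct = 5                   # recharge once the battery hits 95%

days_between_topoffs = cycle_depth_pct / self_discharge_pct_per_day
equiv_full_cycles_per_year = (365 / days_between_topoffs) * cycle_depth_pct / 100

print(f"Days between top-offs:       {days_between_topoffs:.0f}")        # 50
print(f"Equivalent full cycles/year: {equiv_full_cycles_per_year:.2f}")  # ~0.36
# -> on a naive depth-weighted count, the top-offs look negligible,
#    which is what motivated this question in the first place.
```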
What I'm now starting to see, and what I hope someone will confirm, is that any charge cycles (even small, infrequent ones) are really bad news when they happen near full charge:
- They happen much faster than one would expect (the effects of self-discharge are increased near full charge).
- They are really detrimental: it looks like a 5% cycle between 95% and 100% can cause many times more "wear" on the battery than a 5% cycle between, say, 45% and 50% (see the toy sketch after this list).
- They exacerbate the negative effects of heat (heat at high charge seems to be much worse for battery wear than the same heat at low charge).
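Here is a toy sketch of the comparison in the second bullet. The wear model (a fourth-power penalty on state of charge) is entirely invented for illustration; the real shape of the curve would have to come from cell-level studies:

```python
# Toy model: wear of a 5% cycle depends strongly on where in the SoC range
# it happens. The stress function below is invented for illustration only.

def wear_per_cycle(low_pct: float, high_pct: float) -> float:
    """Depth-weighted wear with a made-up penalty for cycling at high SoC."""
    depth = (high_pct - low_pct) / 100
    mid_soc = (low_pct + high_pct) / 2
    stress = (mid_soc / 100) ** 4   # assumed exponent; not from real data
    return depth * stress

top_off = wear_per_cycle(95, 100)   # topping-off cycle near full charge
mid_band = wear_per_cycle(45, 50)   # same 5% depth at mid charge

print(f"95-100% cycle wear: {top_off:.4f}")
print(f"45-50% cycle wear:  {mid_band:.4f}")
print(f"Ratio:              {top_off / mid_band:.0f}x")   # ~18x with this toy curve
```

The exact ratio is meaningless here; the point is only that any wear model that penalizes high SoC makes the 95%-100% top-off cycles disproportionately expensive.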
So I suppose it's safe to say that these negative factors, even when self-discharge is the only catalyst for cycling, nearly always outweigh the benefit of diverting normal operational power away from the battery?