Chris Yelland critical of the regulations


It’s great to hear an authority other than the utilities comment on our situation.
The maximum of 3.68 kW of PV power for a home puzzled me. Now at least I have an idea of how they got to that number.
But even with that figure did they assume full power from each panel in their calculations?
I wonder if there might be a response…

Yes. Feeding back at full power for every hour the sun shines optimally, and on cold “cloud effect” days, even a bit more.

In chatting with the CoCT “powers” over a few years, asking them questions I picked up on “another” forum, they said their equipment is not geared for feedback, being old and all that, and would need upgrading IF the Consti Court backs them (my take). Add to that the fact that they have to give every household on a transformer the same opportunity to connect grid-tied AND feed back at full maximum power, cold “cloud effect” days included, so they have to curb the feedback per household to be fair to all.

This is not about our individual housies, our domains, the kingdoms we preside over.
This is about thousands of people wanting to do what we all do, and being fair to each and every one of those thousands.

Methinks Chris Yelland is over-simplifying it for the masses.

If a bunch of engineers debate the issue and then agree with him, after they have argued the shiite out of each other, then I will agree with Chris Yelland. :slight_smile:


You’ve sort of got it.
It’s not that the relays are outdated, it is that the application has changed.
Distribution circuits are generally radial circuits, probably up to about 33 kV-ish in ZA.
Protection relays down at distribution voltages are current-based: overcurrent and earth-fault protection. It is graded (or cascaded, some would say), so that for a fault way down at the end of the line you would expect the least fault current but the fastest trip time.

Just like in your house your main MCB will be a higher current rating than a plug circuit because you don’t want a plug fault to trip your entire house. Equally, the protection philosophy at distribution attempts to only isolate the customers that are on the faulty piece of the circuit and keep the maximum number of customers supplied.

Overcurrent can be very modern protection, and in the correct application, it is perfectly suitable.

Straight overcurrent protection is non-directional: it can only see the magnitude of the AC current. Given the legacy generation model, that was quite adequate, because the source has always been the HV (Transmission voltage) side, heading back to the power station.

I will simplify the actual practice for explanation purposes:

Considering a 4-section radial line 1 > 2 > 3 > 4: if there were a fault on section 3 > 4, then sections 1 > 2 and 2 > 3 would have longer trip times than section 3 > 4. Section 3 > 4 would trip and take out section 4’s customers as well (sorry), but sections 1 > 2 and 2 > 3 would maintain supply.
This works because ESKOM knew it would only put its power stations at end 1, and assumed this would be a forever situation.
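That grading scheme can be sketched in a few lines of Python. The trip times and grading margin below are illustrative values, not real ESKOM settings:

```python
# Hypothetical settings for a 4-section radial feeder 1 > 2 > 3 > 4.
# Each relay sits at the source end of its section; grading means trip
# times shorten toward the end of the line, so only the relay nearest
# the fault operates.
GRADING_MARGIN = 0.4  # seconds between adjacent relays (illustrative)

def trip_times(n_sections, end_time=0.1, margin=GRADING_MARGIN):
    """Relay at section i (1-based) trips after end_time + margin*(n - i) s."""
    return {i: end_time + margin * (n_sections - i) for i in range(1, n_sections + 1)}

def relay_that_trips(fault_section, settings):
    """Every relay between the source and the fault sees fault current;
    the one with the shortest time setting (nearest the fault) operates."""
    upstream = {i: t for i, t in settings.items() if i <= fault_section}
    return min(upstream, key=upstream.get)

settings = trip_times(4)
# Fault on section 3: relays 1, 2 and 3 all see current, but relay 3 is
# the fastest, so customers on sections 1 and 2 keep their supply.
assert relay_that_trips(3, settings) == 3
```

Note that the scheme only works because the current source is assumed to be at end 1; that assumption is exactly what an IPP breaks.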

So now, if end 4 gets an IPP (independent power producer), it can also be a current source for a fault on section 1 > 2.
But with its non-directional overcurrent relay, section 4 still has the fastest trip time, and the whole bang shoot gets tripped. So this form of protection can no longer work reliably, because it can’t know the direction of the current and grade accordingly.

We now need a relay at each section that has a different trip time dependent on the direction of the power flow, so the trip times can be graded in both directions.
In its simplest form, this is called directional overcurrent, and it is a completely different relay.
It requires both a voltage and a current measurement in order to know which direction the power is flowing, because it must provide a different graded trip time depending on the direction of the fault current.
So these non-directional overcurrent relays all need to be swapped out, and that’s expensive. But relays at distribution voltage also don’t get fed at primary values; they are fed by CTs and VTs. So now VTs are required as well, and that requires primary plant, and that’s very expensive.
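As a rough sketch of why the voltage measurement matters: a directional element compares the phase of the fault current against a polarising voltage. The relay characteristic angle and test phasors below are made-up illustrative values:

```python
import cmath
import math

def fault_direction(v_polarising, i_fault, rca_deg=60.0):
    """Directional-element sketch. On an inductive line, current for a
    forward (downstream) fault lags the polarising voltage by roughly the
    line angle (the relay characteristic angle, RCA); for a reverse fault
    the current phasor is shifted by ~180 degrees."""
    angle = math.degrees(cmath.phase(i_fault / v_polarising))
    # Forward if the measured angle lies within +/-90 deg of -RCA.
    return "forward" if abs((angle + rca_deg + 180) % 360 - 180) < 90 else "reverse"

v = cmath.rect(1.0, 0.0)                    # polarising voltage at 0 deg
i_fwd = cmath.rect(1.0, math.radians(-60))  # current lagging 60 deg: downstream fault
i_rev = cmath.rect(1.0, math.radians(120))  # same current reversed: upstream source
assert fault_direction(v, i_fwd) == "forward"
assert fault_direction(v, i_rev) == "reverse"
```

Without the voltage phasor there is nothing to compare the current against, which is why a plain overcurrent relay (and its CT alone) cannot do this job.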

In Ireland, we have installed only directional overcurrent relays at the distribution level since 2006, whether we needed them or not, in anticipation of this train coming. We only install the VTs as they are needed, but all the relays installed at this level are already VT-input ready.

Generation at the Distribution level (wind farms and the like), has exceeded conventional Transmission generation for several years now.

There are other issues as well. We have a lot of web farms in Ireland, and they are VERY concerned about security of supply. So much so that they have all got their own, let’s say, massive “UPSs”. They monitor the quality of supply from the utility, and if they see a frequency blip, rather than take a risk, they ditch the utility supply.
They opt to self-trip without actually having a fault, kind of like what 10,000 inverters at once would do. So the utility has now lost the faulty load block as well as a considerable chunk of healthy load, which in turn causes high-frequency issues where low frequency would be the conventional expectation, and this has the potential to be catastrophic.
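The swing equation gives a feel for why losing a big block of healthy load is so dangerous. The inertia constant and the size of the imbalance below are purely illustrative numbers, not Irish-system data:

```python
def rocof(delta_p_pu, inertia_h=4.0, f_nominal=50.0):
    """Initial rate of change of frequency after a sudden power imbalance,
    from the swing equation: df/dt = f0 * dP / (2H). A positive dP means
    surplus generation (load was lost), so frequency rises. H and dP are
    illustrative per-unit values on the system base."""
    return f_nominal * delta_p_pu / (2.0 * inertia_h)

# Say 10% of system load suddenly self-trips (the faulty block plus the
# healthy load that chose to ditch the utility):
print(rocof(0.10))  # 0.625 Hz/s upward - severe for a small island grid
```

The frequency overshoots upward, the opposite of what under-frequency load-shedding schemes are designed for, which is the “catastrophic” scenario above.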

So there are two sides to every story. I do think Mr Yelland is either pandering to popular opinion or ill-informed.


Indeed. Chris Yelland is a non-government public figure on electricity matters.
But the lack of clarity we have about RE and how it interfaces with the grid is helluva frustrating. I appreciated the dig at CoCT, because they have been on the PR course and have been told their networks are the best in the country… Nah, they don’t cut it! (I used to work for an IT company that had the same attitude, until I got fired.)

To add a bit of weight to Yelland’s “outdated model” argument, and Phil’s excellent explanation: PV inverters have also moved on a bit. We are at the point where a supplier can signal them to help stabilise the grid, improve power factor, etc. But this “outdated” part isn’t really going to help the residential PV market, because it potentially creates an even bigger compliance burden (the PV inverter must now also support these additional new features).

The generation on wind farms can be centrally controlled.
Wind farms also have their own set of issues.
Firstly, and most obviously, they can’t be ordered to produce more if there is no wind.
So central control is more usually used to curtail generation, what they call “feathering” the turbines.
There are response times involved, so it is a control function and not a protection function.
But along come the IPPs, all wanting to be treated equally, and all individually deciding what their version of “equally” is: different-sized wind farms, built at different times, with different government incentives and promises, with different production histories, etc. You can just imagine the table-thumping demands.
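As a toy example of just one possible curtailment rule (pro-rata to available output), here is a sketch; real allocation schemes are tangled up in exactly the contractual arguments described above, and the farm names and megawatt figures are invented:

```python
def curtail_pro_rata(available_mw, target_mw):
    """One 'fair' rule among many: curtail every farm by the same fraction
    of what it can currently produce, so the total matches the dispatch
    target. Real schemes also weigh contracts, incentives and history."""
    total = sum(available_mw.values())
    if total <= target_mw:
        return dict(available_mw)  # no curtailment needed
    factor = target_mw / total
    return {farm: mw * factor for farm, mw in available_mw.items()}

farms = {"A": 100.0, "B": 40.0, "C": 10.0}  # hypothetical available output, MW
dispatch = curtail_pro_rata(farms, target_mw=90.0)
print(dispatch)  # every farm cut to 60% of its available output
```

Whether “same fraction of available output” is fairer than “same fraction of installed capacity” (or of contracted energy) is precisely the kind of thing the table gets thumped about.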


@Phil.g00 ,that is a brilliant explanation and well presented!

This is one of the times I differ from Chris. He is in general very level-headed and uses very good arguments, so in this case I feel the nuance to his conclusion is not presented. It must be!

Yes, there are ways to get around the limitation but this will require significant changes to the network (not just on protection, but mostly on the inverter level communications).

I like the Aussie example (hey, don’t crucify me! They get things right now and again). They enforce inverter-to-utility communication. They USE the residential inverters for network stability!

For example, CoCT central does have a big voltage-deviation problem. I have seen, with my own multimeter, voltages as low as 195 V on one of the phases (not tripping, because the protection is mostly set up to average between phases). Now, 195 V on a customer supply is dangerous to some equipment. If CoCT were smart about it, they would have sent a reactive-power signal to the inverters to boost that one phase’s voltage and balance it out! Magic…
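A first-order feeder model shows why injected reactive power lifts the voltage. The feeder impedance and the 2 kvar figure below are made-up numbers for illustration only:

```python
def voltage_boost(v_now, q_var, x_ohm, r_ohm=0.0, p_w=0.0):
    """First-order estimate of the voltage change at the end of a feeder
    when inverters inject power: dV ~= (R*P + X*Q) / V. All impedances
    and powers here are hypothetical single-phase values."""
    return (r_ohm * p_w + x_ohm * q_var) / v_now

# A sagging 195 V phase, a feeder with ~0.5 ohm of reactance, and 2 kvar
# of reactive power injected by a handful of inverters on that phase:
boost = voltage_boost(195.0, q_var=2000.0, x_ohm=0.5)
print(195.0 + boost)  # roughly 200 V - a few volts recovered without any extra watts
```

This is essentially what volt-var control in modern inverters does automatically; the utility signal just coordinates it.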


Well, blow me down … methinks they “dumbed it down” for non-tech people like me.
Thank you for the explanation.

Methinks it is not as simple as it “sounds” when one has watched SMA/Victron and other brand-name videos on mini-grids and how they all interface to keep themselves, and the grid they are connected to, stable… with batteries!

Someone said to me the other day: why is Tygervalley not covered in solar panels? All malls for that matter, school roofs, parking lots… THAT will solve the power outages!

So I shared with him a few thoughts:
Storm moves in over Cpt, solar generation drops off a cliff. Where would the base power now come from, as I’m not aware that the base power generation, Koeberg/Eskom, can ramp up that fast? So how do “they” protect the grid?
Or when everyone rushes home as the sun sets and we all start cooking, where does that power now come from?
Or when CPT has a damn long, dark winter: again, the then-mothballed stations must come online, having been “switched off” in summer because there was too much production.
For grid and solar generation integration you need a bit more in place, something along the lines of Elon Musk’s battery farms, or the grid will collapse.

It is not as simple as putting up a few panels and begone the problems.

I was a youngster in a place called Kriel, during the construction of Kriel and Matla power stations. These were South Africa’s first 6 pack super stations.
I was “fairly reliably” informed that load predictions were based on the radio and TV times magazine back then.
One thing I can vouch for first-hand is a load graph in an Irish power station foyer, taken during a famous Ireland World Cup football match. Someone had literally taken the trouble to annotate the various blips on the graph with referee decisions, goals, half-time etc. These were breaks in play where people had put the kettle on en masse.
It is still there today as far as I know.


Very interesting! Based on this explanation, could the following also cause some “irritations” in a neighbourhood?
Assume it is a fairly affluent neighbourhood with ample solar panels and sizeable inverters in many houses. It is a nice and sunny day. There is little demand in the houses themselves for the large amounts of power being generated (as it is daytime and people are working) and so all the houses send nice current back into the grid.

Would the point that supplies the neighbourhood be able to deal with that amount of current on its supply cables? (Presumably this will be three-phase.) If not, would this cause the supply point to trip its breaker, so that all the houses would be cut off from supply?

If the houses with PV have batteries, sure, not too bad for them. But some might not have, and that would be a little bit of an issue.

It’s a big issue for the utilities! In Germany they initially had incentives for residents to install PV panels on their roofs. Initially, the power generated was not used by the residents but was fed back to the utility, which thought it could then distribute this power as required.
What happened, however, was that they couldn’t cope with this huge amount of erratic power, so they then encouraged homeowners to use the power themselves.
I think that the more organised utilities these days have to be able to quickly store and draw power from a storage facility, be that huge battery banks or hydro storage dams but this has to be planned up front.
One of the mitigating features of a smart grid with RE power facilities is that the larger the grid, the better they can manage it. The reason is that, because conditions differ from area to area, the need for power in one location can be met by a faraway area that has excess power.

But you still sit with the problem @Phil.g00 described: you are generating at the “thin” end of your cables/protection - probably not a great situation to be in if the protection in your system wasn’t built with that in mind.

Well, the first obvious issue, of course, is that there is impedance in that upstream path, so the voltage at the “small end” will push upwards. But since the cables are already sized for the same amount of energy coming downstream, with a maximum allowed voltage drop that is often the same as the maximum allowed overvoltage, this would likely not be a problem. If everyone at least sticks to the size of their connection, you may see a voltage rise, which should be less than 10%.

Depending on the state of the network, you could have wider voltage swings though.
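A back-of-the-envelope check of that symmetry argument, using a made-up cable resistance and the 3.68 kW connection limit mentioned earlier in the thread:

```python
def feeder_voltage_change(p_w, v_nominal=230.0, r_ohm=0.4):
    """Rough single-phase estimate: dV ~= R * P / V. The same cable
    resistance that causes the design voltage drop under import causes a
    near-symmetric rise under export. The 0.4 ohm figure is hypothetical."""
    return r_ohm * p_w / v_nominal

drop = feeder_voltage_change(3680.0)   # importing 3.68 kW: sags ~6.4 V
rise = feeder_voltage_change(-3680.0)  # exporting 3.68 kW: rises ~6.4 V
print(drop, rise)
```

So to first order, if the cable was sized to keep the drop inside the allowed band, the rise from the same power exported stays inside it too, provided everyone respects their connection size.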


See @plonkster 's reply.
The size of the hose limits the flow in both directions. If sufficient water can flow in one direction it can flow in the other direction.
Whether you want it to or not is another question.
Equally, voltage control is a fairly gradual, local change to the system, and if a voltage drop during heavy loading can already be dealt with, a rise shouldn’t present an issue. Transformers at certain levels automatically change taps throughout the system.
Transformer tap-changers are usually rated around the nominal tap, and have as much range to raise voltage as they have to drop it.

Something that may be done where there is poor infrastructure is to change a tap on the transformer in order to compensate for voltage drop during high demand. This could well cause the voltage at low load to be in the low 240V bracket. Add a bunch of PV-inverters to that, and it may well exceed 253V (where NRS097 tells you to disconnect) sooner than expected.
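A tiny sketch of that failure mode. The no-load voltages below are hypothetical; only the 253 V limit comes from the NRS097 figure cited above:

```python
NRS097_UPPER_LIMIT = 253.0  # 230 V + 10%, the disconnect threshold cited above

def must_disconnect(v_no_load, pv_rise_v, limit=NRS097_UPPER_LIMIT):
    """If a fixed tap was raised to prop up the voltage under heavy demand,
    the no-load voltage may already sit near 240 V; PV export then adds its
    own rise on top. The input voltages are illustrative, not measurements."""
    return v_no_load + pv_rise_v > limit

# Normal tap: 230 V no-load plus a 10 V PV rise stays under the limit.
print(must_disconnect(v_no_load=230.0, pv_rise_v=10.0))  # False
# Raised tap: 244 V no-load plus the same 10 V rise breaches 253 V.
print(must_disconnect(v_no_load=244.0, pv_rise_v=10.0))  # True
```

The inverter is doing exactly what the standard asks; it is the tap setting chosen for the old one-way world that pushes it over the edge early.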

There is also the fact that quite often affluent neighbourhoods are also old neighbourhoods. In a study about the habits of (dollar) millionaires, it appears they buy old houses in old neighbourhoods close to good schools. They buy antique furniture from solid wood and drive old crappy (but reliable) cars like Toyotas and Hondas. Apparently… so affluent may well go with an aged infrastructure more often than you think :slight_smile:

These will only be installed at the high- and medium-voltage substations. Your local substation, which finally converts the voltage to 380 V three-phase, will only have a manual tap (citation needed).
This means that the regulation is done per the voltages in the substations, not at the point of use.
So they estimate the voltage drop in the ‘last mile’.

Hence the “at certain levels” distinction.
There is definitely not a chap who runs around measuring individual 380 V supplies as the load fluctuates.
I am battling a bit to understand your point.
I think the gist of what you are saying is that it is the utility’s responsibility to adapt voltages at a more granular level to deal with domestic solar inverters.

I don’t think this is a fair expectation.

Yes, downstream voltages are affected by upstream voltages and upstream voltages are affected by downstream load. It is a chain and each player has a role.

A customer has a contractual right to a supply voltage level within certain tolerances.
In order to satisfy that obligation, the upstream Transmission and Distribution networks have a contractual right to expect certain voltage tolerances from their suppliers, namely Power stations.
This is reasonable and to an average customer, that relationship has been beneficial.

So it follows that a domestic customer can demand a certain voltage, but when that same customer wears a generator hat, they are duty-bound to deliver a certain voltage.
Once Joe Soap gets himself a solar inverter, he is no longer just a customer, and responsibility for that network voltage falls in part on him.


Do you know what the domestic electricity user can expect (max/min voltage)?

I have seen it, but I can’t remember ZA’s. 10%-ish normally; it is easy to look up.
It will be the basis of ZA’s grid-code (NRS whatever) selection in inverter compliance settings.
This is why I like standards, there are so many to choose from.