A question was asked about the processing time and transfer time in IEC 61850, and whether the processing time was included in the standard. My response was:

The test specification in section 7.2.1 of IEC 61850-10 breaks down the total transfer time into:

  • ta: the time required for the sending device to transmit the process value,
  • tb: the time required for the network to deliver the message, and
  • tc: the time required for the receiving device to deliver the value to its process logic.

It then states:

The measured output (input) latency shall be less than or equal to 40 % of the total transmission time defined for the corresponding message type in IEC 61850-5 Subclause 13.7. NOTE 1 The value of 40 % on each end of the connection leaves over 20 % for network latencies.

This maximum time applies mainly to the message types 1 (Fast messages) and 4 (Raw data messages); these messages make use of the priority mechanisms of the network components defined in IEC 61850-8-1 and IEC 61850-9-2. Messages of type 2 may also be assigned a high priority.
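To make the arithmetic concrete, here is a minimal Python sketch of that budget split. It uses the 3 ms total transfer time for a transmission substation that I mention further down; the function name and structure are my own illustration, not anything defined in the standard.

```python
def transfer_time_budget(total_transfer_time_s):
    """Split a total transfer time (from IEC 61850-5) according to the
    IEC 61850-10 subclause 7.2.1 figures quoted above:
      ta (sending device output latency)  <= 40 %
      tc (receiving device input latency) <= 40 %
    leaving 20 % of the budget for tb, the network transit time.
    """
    return {
        "ta_max_s": 0.40 * total_transfer_time_s,
        "tb_max_s": 0.20 * total_transfer_time_s,
        "tc_max_s": 0.40 * total_transfer_time_s,
    }

# Example: a 3 ms transfer time allows 1.2 ms in each device and 0.6 ms on the network.
print(transfer_time_budget(3e-3))
```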

The question “What’s the difference between bay, process and station level, why and when we use them?” was asked. My response was:

@Mudassir, if you have access to the 61850 standards then the best place to get the formal definitions is IEC 61850-1. The station, bay and process levels are defined in section 6.2, with Figure 2 showing them graphically. To summarise the definitions:

  • Process level devices are typically remote I/Os, intelligent sensors and actuators.
  • Bay level devices consist of control, protection or monitoring units per bay.
  • Station level devices consist of the station computer with a database, the operator’s workplace, interfaces for remote communication, etc.

Basically the process level is where the control system interfaces with tangible hardware: current transformers, isolators/disconnectors, circuit breakers, transformer temperature sensors and so forth.

The bay level is where the substation control system looks after a piece of plant as a whole (e.g. transformer, feeder), and as Grant mentions, an understanding of the intricacies here will require an understanding of power systems control and protection.

The IT and telecoms equipment in a substation sits at the station level, but power systems knowledge is still needed to understand ‘whole of station’ functions like CB failure protection, bus protection, control philosophies (equipment) and how the humans will interact with the substation (remote operation or local?).

The IEC 61850 family is designed to be read in sequence: -1, then -2 and so forth. This means that you’re only getting to the nitty-gritty of GOOSE, MMS and sampled values in -8-1 and -9-2 once you’ve read about the object model, how logical nodes are used and the use of ACSI to communicate between nodes. Avoid jumping in at GOOSE/SV, otherwise you risk seeing 61850 as ‘just another protocol’. It is a system and should be thought of as such.

There are some good introductions to 61850 out there on the internet, so use your favourite web search engine and see what you can find. I’d suggest looking at the works from 2004-2008 first as this is when 61850 was kicking off and the good introductory material was published. PAC World and GE’s Protection & Control Journal (now called the Grid Modernization Journal) are well worth looking at.

The question was asked whether IEC 61850-9-2 Process Bus will be a future trend in the Utilities market, even for indoor substations. My response was:

I believe there is a strong benefit for indoor GIS substations too. 9-2 process buses facilitate vendor-independent connections to non-conventional instrument transformers (NCITs), such as Rogowski coils. Rogowski coils are used in ABB’s combi-sensor for GIS and iPASS, and these have been converted to 9-2LE operation in Queensland.

The cable runs are shorter in a GIS substation, but the process bus advantages of combining more sensors and monitoring into a single fibre optic pair from each bay can still be realised. The largest GIS switchroom I’ve seen, at the Taichung Power Station in Taiwan, was around 320 m long and had 345 kV and 169 kV GIS buses. The linear design (compared to a star/radial design in an AIS yard) means that reducing multi-core copper cabling is still worthwhile.

You have some preconceptions there that are not entirely correct:

  1. Yes, redundancy is important. This can be solved in two ways. The first is to use PRP or HSR to provide redundant network connections from the MU to the IEDs. The second is to use two MUs (from different vendors). This is similar to the Main1/Main2 or X/Y approach used now. Compared with a conventional substation, all an MU is doing is moving the ADCs into the switchyard. As Maciej pointed out, this has a BIG safety benefit.
  2. Signal delay does affect performance, but this is why the performance classes are defined in IEC 61850-5. For a transmission substation there is an allowance of 3 ms to get the measurement into the ‘thinking’ part of the IED. This includes the digitising and publishing time (40%), network transmission (20%) and receiving and processing (40%). My research found that a transformer relay that could use CTs or SV as inputs responded 0.4 ms slower with SV than with CTs. This paper has just been accepted and I have linked to it below.
  3. Yes, LAN design is required for a network-based solution, just as proper cable design (selecting cross-sectional area, terminal types etc.) is required for a hard-wired one. Provided the network traffic does not exceed the LAN limit (e.g. 100 Mb/s), the ‘slow down’ is simply due to queuing effects and the largest delay will be just under 250 µs (50 Hz) or 208 µs (60 Hz); see the sketch after this list for where those figures come from. Even with a ‘plug and pray’ approach, process bus networks are surprisingly robust. I tried really hard to break my test network, and overloading traffic (injecting at 1 Gb/s) was really the only way I achieved it. At the risk of blatant self-promotion, I would like to link to my research papers, which are experimental assessments of process bus operation:
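On the queuing figures in item 3, here is a minimal sketch of where the 250 µs and 208 µs numbers come from. It assumes 80 samples per nominal power-system cycle (the 9-2LE rate for protection streams); the sampled value frame size of roughly 125 bytes is my own assumption for illustration, not a figure from the guideline.

```python
# Minimal sketch: sampled value timing on a 100 Mb/s process bus.
# Assumes 80 samples per nominal cycle (9-2LE protection rate) and an
# illustrative SV frame size of ~125 bytes (my assumption).

SAMPLES_PER_CYCLE = 80
LINK_RATE_BPS = 100e6        # 100 Mb/s LAN from item 3
SV_FRAME_BYTES = 125         # assumed on-the-wire size of one SV frame

def sample_interval_s(f_nominal_hz):
    """Spacing between successive SV frames from one merging unit."""
    return 1.0 / (SAMPLES_PER_CYCLE * f_nominal_hz)

def serialisation_time_s(frame_bytes=SV_FRAME_BYTES, link_bps=LINK_RATE_BPS):
    """Time one frame occupies the link; each queued frame adds this much delay."""
    return frame_bytes * 8 / link_bps

for f_hz in (50, 60):
    print(f"{f_hz} Hz: one SV frame every {sample_interval_s(f_hz) * 1e6:.0f} us, "
          f"about {serialisation_time_s() * 1e6:.0f} us per frame on the wire")

# While the offered traffic stays below the link rate, the queue drains within
# one sample interval, so the worst-case queuing delay stays just under it.
```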

@Michael, you are quite correct in that the NCIT-to-MU interface is vendor specific. In some cases, such as SDO’s Optical CT, the secondary converter with the light source is in the same box as the merging unit. ABB’s iPASS solution is a retrofit and uses the MVB synchronous serial protocol, which used to go to ABB’s relays, to supply data to the MU.

The benefit of using 61850-9-2 as the IED interface is that the selection of instrument transformers and protection IEDs can be decoupled. In the past you had to select matched pairs, but this is no longer the case. There will always be a vendor-dependent section, because that is how things are made; I doubt anybody would expect an SDO/Arteche secondary converter to drive a NxtPhase/Alstom sensor head. There is nothing to stop a utility installing sensor heads from two different vendors, running to two separate secondary converters, then to two separate merging units, and then onto two separate LANs. With conventional CTs the secondary toroids for X & Y are independent; they’re just in the same bushing. With OCTs the HV apparatus is much smaller, so you’re able to achieve the same thing.

As for analogue interfaces, it seems that very few people adopted them. IEEE Std C37.92 defines a nominal 4 V rms for 1 pu voltage and 200 mV rms for 1 pu current. It is certainly easier to generate these signals than 110 V rms and 1 A rms, but when the OCT and NCIT measurement engines are digital it makes sense to publish the information in true digital form. This reduces noise in the system and allows the use of fibre optic cables for communication. The NCITs that support high power outputs generally do so with an external amplifier module, and this increases the cost of the system.
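To put some numbers on that, here is a minimal sketch of the scaling implied by the low-energy analogue nominals quoted above (4 V rms per 1 pu voltage, 200 mV rms per 1 pu current). The ratings in the example are arbitrary illustrations, not values from any particular NCIT, and the helper names are my own.

```python
V_OUT_PER_PU_VRMS = 4.0     # low-energy analogue output at 1 pu voltage
I_OUT_PER_PU_VRMS = 0.200   # low-energy analogue output at 1 pu current

def voltage_output_vrms(measured_v, rated_v):
    """Analogue output for a measured primary voltage, given the rated (1 pu) value."""
    return (measured_v / rated_v) * V_OUT_PER_PU_VRMS

def current_output_vrms(measured_a, rated_a):
    """Analogue output for a measured primary current, given the rated (1 pu) value."""
    return (measured_a / rated_a) * I_OUT_PER_PU_VRMS

# Example: nominal voltage on a 132 kV system, and a 10 kA fault on a 2 kA rated feeder.
print(voltage_output_vrms(132e3, 132e3))   # 4.0 V rms
print(current_output_vrms(10e3, 2e3))      # 1.0 V rms (5 pu)
```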

You might see more of this technology in Oil & Gas. I’m now in this industry and we have 132/33 kV substations to power electrically driven gas compressors. Sometimes it is more economical to buy power from the grid than to burn valuable fuel gas in a turbine to drive a compressor.

I’ve been on the hunt for a really good keyboard for some time. I bought a cheap Logitech one to go with my new desktop PC, but discovered that the arrangement is non-standard. While it has all the keys (unlike some of the Apple keyboards), they’re in strange places and crammed in together.

I found out about Unicomp, who are continuing the line of quality keyboards from IBM and Lexmark. They are still making the keyboards in Lexington, USA (rather than Lexington, PRC). The shipping is quite steep as the keyboards weigh quite a lot, but I was able to find one on eBay secondhand.


I ended up taking the plunge and buying a brand new ‘SpaceSaver’ in black and grey from Unicomp, but that one was sent by FedEx (and the FedEx speed makes the USPS and Australia Post look really good; I doubt I will ever use FedEx again).

The key feel is quite heavy, but I think that promotes better key striking and I keep my hands up high rather than just bending my fingers. This should be better for me in the long run.

I was warned by other reviews that the keyboard is quite noisy, and I certainly underestimated the noise level. I will be keeping the Logitech handy (got to love USB for allowing multiple keyboards on a PC) for late-night gaming or typing. I had intended for the big beige keyboard (this one) to be used at work and the nice small black keyboard to be used at home.

Well, not to be. The ‘bl**dy noisy’ keyboard has been banished from work. I guess the sound of hard work is too much for some, eh Al :-) ? My wife liked the keyboard when I was writing up my thesis because she could tell that I was working (and not slacking off), and so didn’t walk in and break my chain of thought.

My parting thoughts are that the keyboard takes some getting used to, but it won’t slide around the desk, looks ‘old school’ and will probably outlast any piece of computing hardware you have in your house.