The question "Which relay brand is good?" was asked on LinkedIn. A few specific answers were put forth. This is my response, which has been quite well received:

Isn't the best relay the relay that meets your specific needs in the most cost effective and reliable manner? I know this is a non-answer, but every case is going to be different. I've used many of the different brands, and some 'felt' better to me, but other engineers and technicians prefer others.

I reckon the reliability and protection functions are going to be much of a muchness — they'll work, and they'll keep working. What distinguishes the different brands are:

  1. Local support. If you have a problem can you get help in your country in a timely fashion? No point having the best relay if you need to schedule a visit from a support engineer from three countries away.
  2. Free access to programming tools. Some vendors still insist on charging for their programming tools. WHY? Surely the relay is the 'dongle'! My preference is to use the tools that are free to download, and that are easy to use for programming and diagnostics (e.g. event & waveform downloads). Do these tools support data warehouses and revision control systems?
  3. Panel space & wiring. Some relays are HUGE — is this really needed? Can you easily remove a terminal block or slide out the relay if a replacement is needed?
  4. Integration flexibility. There is huge variation in how IEC 61850 is implemented by vendors. Do the relays work with logical nodes 'natively', or is IEC 61850 treated as another bolt-on protocol like DNP3? Do they support enough report control blocks? Can a dataset be used by more than one RCB or GOOSE control block?
  5. Documentation — this is linked to (4). Can you get the documents you need to design a system from the vendor, without needing to install all the programming tools? Getting hold of application manuals, MICS, PICS, PIXIT etc. all makes life easier at the design stage.
  6. Cost. Utilities have budgets to meet, and there is significant variation in pricing (let's not talk details in this forum though). Remember to include support agreements, programming tools, training etc. in the costs.

Just my thoughts, and I'd be interested to hear what others weigh up when deciding who to go with for projects or 'panel of suppliers' contracts.

The question "Does anyone know if there are guidelines or industry standard for commissioning substation LAN which focuses on Ethernet switches and possibly routers?" was asked. My response was:

Yang, RFC 2544 has a series of tests for benchmarking the performance of interconnect devices like Ethernet switches. The test protocols in there might be a good start for proving network performance.

Any 61850-based substation FAT or SAT should be for the system, not just for the IEDs.

Without Ethernet you don’t have a substation control/protection system. This testing could be system-level performance-based, making sure the inter-trips and controls work within the required time-frame and reliability. I like component testing, but this is more targeted at type testing or design testing. However, if the substation control functional specification has network performance requirements (latency, throughput etc) then this should be tested at the FAT and SAT too.

IEC 61850-4 has the breakdown of what is done and when. FAT and SAT are described in sections 7.3.6 and 7.3.7 respectively.

Hi Yang, thanks for the reminder about Y.1564. I had a nagging feeling there was another test spec out there.

I haven’t seen Y.1564 used in substations, but that doesn’t mean it shouldn’t be considered. I think there is much the power industry can learn from Telecoms.

I think a ping test is a very poor test. Firstly, it only tests the IP/management VLAN and not the others that might be in place. Secondly, the performance metrics are coarse (millisecond resolution). A more thorough test would include packet injection and capture to make sure frames are not lost close to line-rate limits. In addition, low-priority traffic (unicast and multicast) should be injected while protection timing is tested (if GOOSE is used for inter-trips). Similarly, responsiveness to controls should be tested with some background traffic.
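
To put some numbers on 'close to line limits', here is a back-of-envelope sketch of the theoretical frame rates an injection test would need to reach on a 100 Mb/s link. This is plain Ethernet arithmetic, nothing vendor specific:

```python
# Rough line-rate budget for a packet-injection test on a 100 Mb/s
# substation LAN. Every frame on the wire also carries 8 bytes of
# preamble/SFD and is followed by a 12-byte inter-frame gap.

LINK_BPS = 100_000_000  # 100 Mb/s link under test
PREAMBLE = 8            # preamble + start-of-frame delimiter (bytes)
IFG = 12                # inter-frame gap (bytes)

def max_frames_per_second(frame_bytes: int, link_bps: int = LINK_BPS) -> float:
    """Theoretical frame rate at line rate for a given frame size."""
    wire_bytes = PREAMBLE + frame_bytes + IFG
    return link_bps / (wire_bytes * 8)

# Minimum (64 B) and maximum (1518 B) untagged Ethernet frames:
print(round(max_frames_per_second(64)))    # ~148810 frames/s
print(round(max_frames_per_second(1518)))  # ~8127 frames/s
```

An injection tool that cannot sustain roughly 148 kframes/s of minimum-size frames is not actually testing the switch at line rate.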

My philosophy is that I want to take the system to breaking point and figure out the safety margin from there. If you don’t break it you don’t know how close to that point you are.

The question was asked "What is the basic difference between PRP (Parallel Redundancy Protocol) and HSR (High availability Seamless Redundancy)". My response was:

Very simply PRP is a duplicate network of whatever design you have (could be rings, mesh or star), while HSR is a ring network. I've seen PRP used in a star configuration, and I think that is its real strength as it is simple.

HSR is the 'zero delay' alternative to RSTP for rings, but it does require the use of RedBoxes or devices with native support. Quite a few switch manufacturers have embedded modules that provide HSR or PRP functions, or you can purchase FPGA code from a number of vendors and use it directly in your device.
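
For illustration, the duplicate-discard idea at the heart of PRP can be sketched in a few lines. This is a toy model, not the IEC 62439-3 algorithm: real implementations use a drop window per source rather than an ever-growing seen-set, and the sequence number travels in a redundancy control trailer appended to the frame:

```python
# Toy sketch of PRP duplicate discard: the sender transmits the same
# frame on LAN A and LAN B with the same sequence number, and the
# receiver delivers whichever copy arrives first, dropping the other.

class DuplicateDiscard:
    def __init__(self):
        self._seen = set()  # (source_mac, seq) pairs already delivered

    def receive(self, source_mac: str, seq: int) -> bool:
        """Return True if the frame should be delivered up the stack,
        False if it is the duplicate copy from the other LAN."""
        key = (source_mac, seq % 65536)  # 16-bit sequence number wraps
        if key in self._seen:
            return False
        self._seen.add(key)
        return True

dd = DuplicateDiscard()
print(dd.receive("00:01", 7))  # True  - first copy (say, from LAN A)
print(dd.receive("00:01", 7))  # False - duplicate arrives from LAN B
print(dd.receive("00:01", 8))  # True  - next frame in the stream
```

The point is that the application above the discard layer never sees the failover; losing one LAN just means no duplicates arrive.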

@Rod, perhaps in the utility world duplicate protection is the norm, but this is not the case for distribution and industrial applications.

HSR and PRP have application where network-based signalling is used and there is a need to maintain the Ethernet switches. With copper signalling each connection is independent, but with Ethernet a single switch affects multiple connections.

RSTP is ambiguous enough with each vendor having differing parameters and their own tweaks for speed. HSR gives more deterministic performance, and PRP avoids slow recovery times with multiple meshes. A second switch is cheaper than duplicating all the protection, and many vendors are supporting HSR/PRP even in their 'low end' devices.

@Rod, I agree that an engineering design (requirements spec, cost/benefit, failure mode study) should be undertaken for any substation LAN design. Unfortunately this doesn't happen.

For small rings RSTP can work fine, and quite a few vendors have dual port IEDs so switches are not required. The hardware cost to go to HSR is minimal (some FPGA code) since there are already two ports on the device. The 'integrated' switch for ring applications has been around for some time (HiPER Ring, MRP, RSTP) — I don't think HSR/PRP has been driving it. For large rings MRP and HSR have definite speed advantages by knowing the topology is a ring. RSTP is more flexible, but that flexibility slows things down.

BTW, as far as integrating protection into a MU — great idea, shame that one of the large European manufacturers is trying to patent this as their own clever idea.

@Juan, differential protection can work through the exchange of phasors too. 64 kb/s channels (C37.94 or G.703) are not really enough for raw values, so IEDs process the samples to get the data rate down to a level that can be transmitted.
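
Some back-of-envelope arithmetic shows why. The figures assume a 9-2LE-style stream (80 samples per nominal cycle, 8 channels, 4 bytes per value) and count payload only, ignoring all framing overhead, so the real requirement is higher still:

```python
# Why raw sampled values do not fit a 64 kb/s teleprotection channel.
# Assumes a 9-2LE-style stream: 80 samples/cycle at 50 Hz, 8 channels
# (4 currents + 4 voltages), 4 bytes per value, payload only.

NOMINAL_HZ = 50
SAMPLES_PER_CYCLE = 80
CHANNELS = 8
BYTES_PER_VALUE = 4

samples_per_second = NOMINAL_HZ * SAMPLES_PER_CYCLE              # 4000
payload_bps = samples_per_second * CHANNELS * BYTES_PER_VALUE * 8

print(payload_bps)            # 1024000 b/s of payload alone
print(payload_bps / 64_000)   # ~16 x the capacity of one 64 kb/s channel
```

Reducing each cycle of raw samples to one phasor per channel brings the rate down by more than an order of magnitude, which is what makes 64 kb/s channels workable.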

I think that “Protection Computers” will ultimately do what you propose. ABB already do this with the protection of their SVCs, and a Dutch company has a range of products for distribution based on distributed sampling and central protection (but not using 61850); I forget the name at the moment. The substation of the future will have merging units in the field, a few Ethernet switches and two protection computers to do the thinking. The control room can be replaced with a telephone-box-sized panel!

The question was asked "Does anyone know are there any testing procedures, methodology or testing standards for GOOSE Performance in 61850?". My response was:

The test procedures and methods are given in IEC 61850-10.

The UCAIug also has some test methods, but they are buried in their SharePoint site. The following tests are listed:

That should give you something to work from, and might avoid the cost of purchasing IEC 61850-10.

The question was asked: "Is it possible to connect different Ethernet switches (Manufacturer / Configuration / Specification) in a ring network? If yes, then how could be the performance of data transfer?". My response was:

There are a number of international standards that deal with ring topologies. RSTP and MSTP are IEEE standards (802.1D and 802.1Q respectively) that work across vendors. These make a 'virtual break' in the ring that can close when a normally closed link opens.

A more reliable (but more complex) alternative is HSR (high-availability seamless redundancy), defined in IEC 62439-3, which is also vendor independent.

There are vendor specific modifications of standards, such as RuggedCom’s eRSTP that has faster ring repair, or Hirschmann’s HiPER Ring that gives faster ring recovery than RSTP. The Media Redundancy Protocol (MRP), IEC 62439-2, is a derivative of HiPER Ring.

One downside with RSTP is that recovery time increases with the number of devices, and especially with the number of meshes. Another is that default parameters may vary between vendors, so some tuning is required. RSTP does allow more flexibility in network design than MRP (which is exclusively for rings).

Performance with HSR and MRP will be deterministic (0 ms for HSR and a defined setting for MRP). RSTP performance will depend on the configured parameters and number of devices, so I would recommend testing of the final design. Software simulation is also a good idea, and something that I don’t see much of in industrial network design.
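
A toy comparison makes the point. The MRP figure below is just an example of a configured recovery profile, and the RSTP per-hop cost is purely illustrative, not a standard value:

```python
# Toy comparison of ring-repair behaviour: HSR loses nothing (the
# duplicate frame arrives the other way round the ring), MRP recovers
# within its configured profile regardless of ring size, and RSTP
# reconvergence tends to grow with hop count. The 5 ms/hop figure for
# RSTP is purely illustrative.

def recovery_ms(protocol: str, ring_size: int, mrp_profile_ms: int = 200) -> float:
    if protocol == "HSR":
        return 0.0                    # seamless: no frames lost
    if protocol == "MRP":
        return float(mrp_profile_ms)  # fixed by the configured profile
    if protocol == "RSTP":
        return 5.0 * ring_size        # illustrative per-hop cost only
    raise ValueError(protocol)

for n in (8, 32):
    print(n, recovery_ms("HSR", n), recovery_ms("MRP", n), recovery_ms("RSTP", n))
```

The shape of the result is what matters: the HSR and MRP columns are flat as the ring grows, while the RSTP column is the one you have to re-verify every time the design changes.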

The real benefit of using a function specific logical node (e.g. PTRC, PDIF etc) is that the LN conveys a lot of semantic information.

If you use GGIO you need to maintain a parallel set of documents that describe the purpose of each GGIO. This offers no real benefit over DNP3 or Modbus points. If I am subscribing to messages then I know that I have the transduced power (MW) value when I connect CVMMXN1.MX.Watt.mag or PriFouMMXU1.MX.TotW. An analogue GGIO simply doesn’t do this.

Which has more meaning: T3WPDIF1.ST.Op or BOUT_GGIO1.ST.Ind.stVal? I can tell immediately that I've connected the transformer differential trip with the 'proper' LN without needing to refer to anything else. The devices don't care what is subscribed to in the SCD file, but it sure helps reduce the chance of the human making a mistake.
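
To illustrate how much semantics the name alone carries, here is a small sketch that pulls the four-letter LN class out of an object reference. The class descriptions are a tiny illustrative subset of the IEC 61850-7-4 name space:

```python
import re

# The LN class sits between an optional prefix and the instance number,
# e.g. 'T3WPDIF1' = prefix 'T3W' + class 'PDIF' + instance 1. The class
# alone tells you what kind of function you are subscribing to.

LN_CLASSES = {
    "PDIF": "differential protection",
    "PTRC": "protection trip conditioning",
    "MMXU": "measurement",
    "GGIO": "generic process I/O",
}

def ln_class(ln_name: str) -> str:
    """Extract the 4-letter LN class from an LN name like 'T3WPDIF1'."""
    m = re.match(r"^(\w*?)([A-Z]{4})(\d+)$", ln_name)
    if not m:
        raise ValueError(f"not an LN name: {ln_name}")
    return m.group(2)

print(ln_class("T3WPDIF1"), "->", LN_CLASSES[ln_class("T3WPDIF1")])
print(ln_class("BOUT_GGIO1"), "->", LN_CLASSES[ln_class("BOUT_GGIO1")])
```

The first reference resolves to differential protection with no external documentation at all; the GGIO resolves to 'generic I/O', which tells you precisely nothing about the signal.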

These data attributes come from a real IEC 61850 system. The only reason I used GGIO for outputs is that they came from an RTDS that treats GOOSE output as a binary output card. The IEDs all supported 'real' LNs in their published GOOSE datasets. Making sure I had the right GGIO when setting up IED subscriptions was a pain in the neck; setting up the RTDS subscriptions was very easy by comparison.

IEC 61850 is all about systems, not simply communications between devices.

A question was asked about the processing time and transfer time in IEC 61850, and whether the processing time was included in the standard. My response was:

The test specification in section 7.2.1 of IEC 61850-10 breaks down the total transfer time into:

  • ta: the time required for the sending device to transmit the process value,
  • tb: the time required for the network to deliver the message, and
  • tc: the time required for the receiving device to deliver the value to its process logic.

It then states:

The measured output (input) latency shall be less than or equal to 40 % of the total transmission time defined for the corresponding message type in IEC 61850-5 Subclause 13.7.

NOTE 1 The value of 40 % on each end of the connection leaves over 20 % for network latencies.

This maximum time applies mainly to message types 1 (Fast messages) and 4 (Raw data messages); these messages make use of the priority mechanisms of the network components defined in IEC 61850-8-1 and IEC 61850-9-2. Messages of type 2 may be assigned to a high priority.
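
Applied to a 3 ms total transfer time (the allowance for fast trip messages in a transmission substation), the split works out as follows:

```python
# The 40/20/40 split from IEC 61850-10 applied to a given total
# transfer time: each end device may consume at most 40 % of the
# budget, leaving the remaining 20 % for the network.

def budgets_ms(total_ms: float) -> dict:
    return {
        "ta_max (sender)":   0.4 * total_ms,
        "tb_max (network)":  0.2 * total_ms,
        "tc_max (receiver)": 0.4 * total_ms,
    }

for name, value in budgets_ms(3.0).items():
    print(f"{name}: {value:.1f} ms")   # 1.2 / 0.6 / 1.2 ms
```

So for a 3 ms class, each IED gets 1.2 ms and the whole LAN, switches and all, has to deliver the frame within 0.6 ms.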

The question “What’s the difference between bay ,process and station level, why and when we use them?” was asked. My response was:

@Mudassir, if you have access to 61850 standards then the best place to get the formal definitions is from IEC 61850-1. The station, bay and process levels are defined in section 6.2, with Figure 2 showing it graphically. To extract the definitions:

  • Process level devices are typically remote I/Os, intelligent sensors and actuators.
  • Bay level devices consist of control, protection or monitoring units per bay.
  • Station level devices consist of the station computer with a database, the operator’s workplace, interfaces for remote communication, etc.

Basically the process level is where the control system interfaces with tangible hardware: current transformers, isolators/disconnectors, circuit breakers, transformer temperature sensors and so forth.

The bay level is where the substation control system looks after a piece of plant as a whole (e.g. transformer, feeder), and as Grant mentions, an understanding of the intricacies here will require an understanding of power systems control and protection.

The IT and telecoms equipment in a substation sits at the station level, but still requires power systems knowledge to understand ‘whole of station’ functions like CB failure protection, bus protection, control philosophies (equipment) and how the humans will interact with the substation (remote operation or local?).

The IEC 61850 family is designed to be read in sequence: -1, then -2 and so forth. This means that you’re only getting to the nitty-gritty of GOOSE, MMS and sampled values in -8-1 and -9-2 once you’ve read about the object model, how logical nodes are used and the use of ACSI to communicate between nodes. Avoid jumping in at GOOSE/SV, otherwise you risk seeing 61850 as ‘just another protocol’. It is a system and should be thought of as such.

There are some good introductions to 61850 out there on the internet, so use your favourite web search engine and see what you can find. I’d suggest looking at the works from 2004-2008 first as this is when 61850 was kicking off and the good introductory material was published. PAC World and GE’s Protection & Control Journal (now called the Grid Modernization Journal) are well worth looking at.

The question was asked whether IEC 61850-9-2 Process Bus will be a future trend in the Utilities market, even for indoor substations? My response was:

I believe there is a strong benefit for indoor GIS substations too. 9-2 process buses facilitate vendor independent connections to non-conventional instrument transformers (NCITs), such as Rogowski coils. Rogowski coils are used in ABB’s combi-sensor for GIS and iPASS, and these have been converted to 9-2LE operation in Queensland.

The cable runs are shorter in a GIS substation, but the process bus advantages of combining more sensors and monitoring into a single fibre optic pair from each bay can still be realised. The largest GIS switchroom I’ve seen was around 320 m long, with 345 kV and 161 kV GIS buses, at the Taichung Power Station in Taiwan. The linear design (compared to a star/radial design in an AIS yard) means that reducing multi-core copper cabling is still worthwhile.

You have some preconceptions there that are not entirely correct:

  1. Yes, redundancy is important. This can be solved two ways. The first is to use PRP or HSR to have redundant network connections from the MU to the IEDs. The second is to use two MUs (from different vendors). This is similar to the Main 1/Main 2 or X/Y approach used now. All an MU is doing in a conventional substation is moving the ADCs into the switchyard. As Maciej pointed out, this has a BIG safety benefit.
  2. Signal delay does affect performance, but this is why the performance classes are defined in IEC 61850-5. For a transmission substation there is an allowance of 3 ms to get the measurement into the ‘thinking’ part of the IED. This includes the digitising and publishing time (40 %), network transmission (20 %) and receiving and processing (40 %). My research found that a transformer relay that could use CTs or SV as inputs responded 0.4 ms slower with SV than with CTs. This paper has just been accepted and I have linked to it below.
  3. Yes, LAN design is required, just as proper cable design (selecting cross-sectional area, terminal types etc.) is required for a conventional copper-based solution. Provided the network traffic does not exceed the LAN limit (e.g. 100 Mb/s), the ‘slow down’ is simply due to queuing effects and the largest delay will be just under 250 µs (50 Hz) or 208 µs (60 Hz). Even with a ‘plug and pray’ approach, process bus networks are surprisingly robust. I tried really hard to break my test network, and overloading traffic (injecting at 1 Gb/s) was really the only way I achieved it. At the risk of blatant self-promotion, I would like to link to my research papers, which are experimental assessments of process bus operation:
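
For what it's worth, the 250 µs and 208 µs figures fall straight out of the sampling rate: with an 80-samples-per-cycle stream, one SV frame is published every sampling interval, so a queued frame waits at most about one interval before the next is due.

```python
# Where the 'just under 250 us / 208 us' figures come from, assuming
# a 9-2LE-style stream of 80 samples per nominal power-system cycle.

def sampling_interval_us(nominal_hz: int, samples_per_cycle: int = 80) -> float:
    """Time between successive sampled-value frames, in microseconds."""
    return 1e6 / (nominal_hz * samples_per_cycle)

print(round(sampling_interval_us(50)))  # 250 us at 50 Hz (4000 samples/s)
print(round(sampling_interval_us(60)))  # 208 us at 60 Hz (4800 samples/s)
```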

@Michael, you are quite correct in that the NCIT to MU interface is vendor specific. In some cases, such as SDO’s optical CT, the secondary converter with the light source is in the same box as the merging unit. ABB’s iPASS solution is a retrofit and uses the MVB synchronous serial protocol that used to go to ABB’s relays to supply data to the MU. The benefit of using 61850-9-2 as the IED interface is that the selection of instrument transformers and protection IEDs can be decoupled. In the past you had to select matched pairs, but this is no longer the case.

There will always be a vendor-dependent section, because that is how things are made. I doubt anybody would expect an SDO/Arteche secondary converter to drive a NxtPhase/Alstom sensor head. There is nothing to stop a utility installing sensor heads from two different vendors, running to two separate secondary converters, then to two separate merging units, and then onto two separate LANs. With conventional CTs the secondary toroids for X & Y are independent, they’re just in the same bushing. With OCTs the HV apparatus is much smaller, so you’re able to achieve the same thing.

As for analogue interfaces, it seems that very few people adopted them. IEEE Std C37.92 defines a nominal 4 Vrms for 1 pu voltage and a nominal 200 mVrms for 1 pu current. It is certainly easier to generate these signals than 110 Vrms and 1 Arms, but when the OCT and NCIT measurement engines are digital it makes sense to publish the information in true digital form. This reduces noise in the system and allows the use of fibre optic cables for communication. The NCITs that support high power outputs generally do so with an external amplifier module, and this increases the cost of the system.

You might see more of this technology in Oil & Gas. I’m now in this industry and we have 132/33 kV substations to power electrically driven gas compressors. Sometimes it is more economical to buy power from the grid than to burn valuable fuel gas in a turbine to drive a compressor.