Deep Learning Fault Diagnosis of High-Voltage Circuit Breakers Based on Opening/Closing Coil Current [Code Included]
(1) Improved variational mode decomposition and coil current signal preprocessing:
To address the problem that the circuit breaker's opening/closing coil current signal is contaminated by noise and its fault features are weak, an improved variational mode decomposition (VMD) preprocessing method is proposed. On top of conventional VMD, particle swarm optimization (PSO) is introduced to adaptively select the number of decomposition modes K and the penalty factor alpha, with minimum envelope entropy as the fitness function. The optimized VMD decomposes the current signal into six intrinsic mode functions (IMFs), and the three IMFs with the highest correlation coefficients with the original signal are selected for reconstruction. Compared with wavelet threshold denoising, the signal-to-noise ratio after improved-VMD reconstruction rises from 18.5 dB to 26.3 dB. Eight key feature points of the current waveform are also extracted, including the start time, the iron-core motion onset, the contact touch instant, and the peak current. A total of 200 data groups covering the normal state and four fault states (operating voltage too high, operating voltage too low, inter-turn short circuit of the coil, and poor lead contact) are preprocessed, providing high-quality inputs for the classification model.
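As a minimal sketch of this preprocessing step, the snippet below shows the envelope-entropy fitness and the correlation-based IMF reconstruction described above. It assumes the IMFs have already been obtained from a VMD implementation (for example, the `vmdpy` package); the helper names `envelope_entropy` and `reconstruct_from_imfs` are illustrative and not taken from the article's code.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_entropy(x):
    """Envelope entropy of a 1-D signal; used as the PSO fitness when choosing K and alpha."""
    env = np.abs(hilbert(x))
    p = env / (np.sum(env) + 1e-12)          # normalize the envelope into a probability-like vector
    return -np.sum(p * np.log(p + 1e-12))

def reconstruct_from_imfs(signal, imfs, n_keep=3):
    """Keep the n_keep IMFs most correlated with the original signal and sum them.

    imfs: array-like of shape (K, N), e.g. the mode matrix returned by a VMD routine.
    """
    corrs = [abs(np.corrcoef(signal, imf)[0, 1]) for imf in imfs]
    keep = np.argsort(corrs)[-n_keep:]        # indices of the most relevant modes
    return np.sum(np.asarray(imfs)[keep], axis=0)
```

In the workflow described above, a PSO loop would evaluate `envelope_entropy` on candidate (K, alpha) decompositions and the best decomposition would then be passed to `reconstruct_from_imfs` to obtain the denoised waveform.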
(2) CNN-BiGRU-Attention diagnosis model optimized by an improved sine dung beetle algorithm:
A multi-channel fault diagnosis model combining a convolutional neural network, a bidirectional gated recurrent unit, and an attention mechanism (CNN-BiGRU-Attention) is constructed. The CNN stage uses two 1-D convolution layers to extract local deep features of the current signal, with max pooling as the pooling operation. The BiGRU layer captures bidirectional temporal dependencies in the signal, with 64 hidden units. The attention layer weights the BiGRU outputs to highlight the key time steps, and the model outputs a probability distribution over the five states. Because the model hyperparameters are difficult to set by hand, an improved sine dung beetle algorithm is proposed for their optimization; the decision variables are the CNN kernel size, the GRU hidden dimension, the learning rate, and the L2 regularization coefficient. The improvement introduces chaotic initialization, an adaptive cosine step size, and a crossover-mutation operation into the sine dung beetle algorithm. After optimization, the model reaches a diagnosis accuracy of 96.5% on the test set, 7.3 percentage points higher than the unoptimized CNN-BiGRU-Attention (89.2%).
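The hyperparameter search can be wired up roughly as follows. This is a hedged sketch: `make_fitness` and `build_and_train` are hypothetical helpers that decode a candidate vector into (kernel size, hidden dimension, learning rate, L2 coefficient), train a model, and return validation accuracy; the `SineDungBeetle` class referenced in the comment is defined in the full listing at the end of the article, and the bounds shown are illustrative, not values from the paper.

```python
import numpy as np

def make_fitness(X_train, y_train, X_val, y_val, build_and_train):
    """Wrap model training as a fitness function for the optimizer (smaller is better)."""
    def fitness(x):
        # Decision variables: x[0] kernel size, x[1] GRU hidden dim, x[2] learning rate, x[3] L2 coefficient
        kernel = int(round(x[0]))
        hidden = int(round(x[1]))
        lr, l2 = float(x[2]), float(x[3])
        acc = build_and_train(X_train, y_train, X_val, y_val,
                              kernel_size=kernel, hidden_size=hidden,
                              lr=lr, weight_decay=l2)   # hypothetical training callback
        return 1.0 - acc                                 # minimize validation error
    return fitness

# Illustrative search bounds: kernel size, hidden dim, learning rate, L2 coefficient
lower = np.array([3, 32, 1e-4, 1e-5])
upper = np.array([9, 128, 1e-2, 1e-2])
# optimizer = SineDungBeetle(n_beetles=20, max_iter=50)           # class defined in the listing below
# best_x, best_err = optimizer.optimize(make_fitness(...), dim=4, bounds=(lower, upper))
```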
(3) LabVIEW fault diagnosis system development and field verification:
An online fault diagnosis system was developed on the LabVIEW platform, integrating the data acquisition card driver, the signal preprocessing module, and the invocation of the deep learning model. The system reads the coil current waveform captured by a virtual oscilloscope over USB, displays it in real time, and performs diagnosis. Verification was carried out on a high-voltage circuit breaker test bench: each of the five states was tested 20 times, giving an overall recognition rate of 96.5% (normal state 100%, voltage too high 98%, voltage too low 95%, coil short circuit 92%, poor contact 97%). The average diagnosis time per sample is 0.2 s, which satisfies real-time monitoring requirements. Compared with traditional threshold-based methods, the deep learning approach distinguishes similar fault modes effectively; for example, the current waveforms of the voltage-too-low and coil-short-circuit states are sometimes similar, yet the model still classifies them correctly in 91% of such cases.
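How the LabVIEW front end invokes the trained network is not shown in the article's code. The sketch below assumes the model is exported as TorchScript and called from a LabVIEW Python Node or any similar Python bridge; the file name and function names are illustrative assumptions, not the author's implementation.

```python
import numpy as np
import torch

_MODEL = None  # loaded once, then reused for every diagnosis call

def load_model(path='cnn_bigru_attention_scripted.pt'):
    """Load a TorchScript model exported with torch.jit.script(model).save(path). Call once at startup."""
    global _MODEL
    _MODEL = torch.jit.load(path)
    _MODEL.eval()

def diagnose(current_waveform):
    """current_waveform: sequence of sampled coil-current points -> predicted class index (0-4)."""
    x = torch.as_tensor(np.asarray(current_waveform, dtype=np.float32)).view(1, 1, -1)  # (batch, channel, seq_len)
    with torch.no_grad():
        logits = _MODEL(x)                      # (1, 5) class scores from CNN-BiGRU-Attention
    return int(torch.argmax(logits, dim=1).item())
```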
```python
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.preprocessing import LabelEncoder
from scipy.signal import hilbert


# Improved VMD: PSO selects the mode number K and penalty factor alpha
def pso_vmd(signal, K_range=(3, 8), alpha_range=(100, 3000)):
    # Fitness: envelope entropy (smaller means clearer impulsive features)
    def envelope_entropy(s):
        envelope = np.abs(hilbert(s))
        envelope = envelope / (np.sum(envelope) + 1e-8)
        return -np.sum(envelope * np.log(envelope + 1e-8))

    # The PSO iteration loop is omitted in the original listing;
    # representative optimized values are returned as a placeholder.
    best_K, best_alpha = 6, 1500
    return best_K, best_alpha


# CNN-BiGRU-Attention model
class CNN_BiGRU_Attention(nn.Module):
    def __init__(self, input_dim=1024, num_classes=5):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 32, kernel_size=5, padding=2)
        self.pool = nn.MaxPool1d(2)
        self.conv2 = nn.Conv1d(32, 64, kernel_size=3, padding=1)
        self.gru = nn.GRU(input_size=64, hidden_size=64, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.attention = nn.Linear(128, 1)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):
        # x: (batch, 1, seq_len)
        x = torch.relu(self.conv1(x))
        x = self.pool(x)
        x = torch.relu(self.conv2(x))
        x = x.permute(0, 2, 1)                                     # (batch, seq, features)
        out, _ = self.gru(x)                                       # (batch, seq, 128)
        # Attention: weight each time step and sum into a context vector
        att_weights = torch.softmax(self.attention(out), dim=1)    # (batch, seq, 1)
        context = (out * att_weights).sum(dim=1)
        logits = self.fc(context)
        return logits


# Improved sine dung beetle algorithm
class SineDungBeetle:
    def __init__(self, n_beetles=20, max_iter=50):
        self.n = n_beetles
        self.max_iter = max_iter

    def optimize(self, fitness_func, dim, bounds):
        # Population initialization within the given bounds
        pop = np.random.rand(self.n, dim)
        for i in range(self.n):
            pop[i] = bounds[0] + pop[i] * (bounds[1] - bounds[0])
        fitness = np.array([fitness_func(p) for p in pop])
        best_idx = np.argmin(fitness)
        best_x, best_f = pop[best_idx].copy(), fitness[best_idx]

        for t in range(self.max_iter):
            # Sine-based adaptive step size
            step = 0.5 * (1 + np.sin(np.pi * t / self.max_iter))
            for i in range(self.n):
                # Position update towards the current best solution
                r = np.random.rand(dim)
                new_pos = pop[i] + step * (best_x - pop[i]) * r
                new_pos = np.clip(new_pos, bounds[0], bounds[1])
                new_f = fitness_func(new_pos)
                if new_f < fitness[i]:
                    pop[i], fitness[i] = new_pos, new_f
                    if new_f < best_f:
                        best_f, best_x = new_f, new_pos.copy()
            # Crossover / mutation on half of the population
            cross_idx = np.random.randint(0, self.n, self.n // 2)
            for i in cross_idx:
                j = np.random.randint(0, self.n)
                if np.random.rand() < 0.5:
                    cross_point = np.random.randint(1, dim - 1)
                    pop[i, cross_point:] = pop[j, cross_point:].copy()
        return best_x, best_f


# Model training
def train_model(X_train, y_train, X_test, y_test):
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = CNN_BiGRU_Attention(input_dim=X_train.shape[1], num_classes=5).to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)
    # Training loop omitted in the original listing
    return model


if __name__ == '__main__':
    # Simulated data: 200 samples, 1024-point current waveforms
    X = np.random.rand(200, 1024)
    y = np.random.randint(0, 5, 200)
    # Label encoding
    le = LabelEncoder()
    y_enc = le.fit_transform(y)
    # Convert to tensors
    X_tensor = torch.FloatTensor(X).unsqueeze(1)
    y_tensor = torch.LongTensor(y_enc)
    # Simple training demo
    model = CNN_BiGRU_Attention()
    optimizer = optim.Adam(model.parameters(), lr=0.001)
    for epoch in range(10):
        out = model(X_tensor)
        loss = nn.CrossEntropyLoss()(out, y_tensor)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f'Epoch {epoch}, Loss: {loss.item():.4f}')
```