A YOLOv5-based FPS aimbot
Note: this cheat is of limited practical value (it barely even counts as a cheat), mainly because of YOLOv5's inference speed. It does not snap the crosshair straight onto the target's head: mouse movement goes through win32api, and it is hard to lock onto a target by moving the cursor an exact number of pixels, with in-game sensitivity making aiming harder still. So I aim in multiple steps: when the crosshair is far from the head, each step moves about 100 pixels; when it is close, each step moves about 10 pixels. The drawback is that aiming is slow, roughly 3 frames per second. I run this on a laptop, and 3 FPS is about that machine's limit.
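A minimal, self-contained sketch of that stepped-movement rule (the thresholds and step sizes match the ones used in `mouse_move()` later; `aim_step` is a name invented here for illustration):

```python
def aim_step(delta: float) -> int:
    """Signed per-frame mouse step for a signed pixel offset
    (target coordinate minus current crosshair coordinate)."""
    magnitude = abs(delta)
    if magnitude > 150:      # far from the head: coarse step
        step = 100
    elif magnitude > 80:     # medium range
        step = 40
    else:                    # close: fine step
        step = 10
    # Step toward the target: a negative offset means move left/up.
    return step if delta >= 0 else -step

print(aim_step(300))   # -> 100
print(aim_step(-90))   # -> -40
print(aim_step(5))     # -> 10
```

Calling this once per captured frame on each axis is why a far-away target takes several frames (and, at roughly 3 FPS, a second or more) to reach.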
一、Environment
OS: Ubuntu 22.04
Python: 3.10
二、How it works
- Capture a screenshot of the screen
- Run YOLOv5 on the screenshot to detect targets
- Take the detected target's position and compute its distance from the crosshair
- Drive the mouse through win32api
- Move the crosshair toward the target over several iterations
In testing, it works reasonably well against stationary targets; against moving ones it falls apart completely.
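The aim-point part of the third step can be illustrated on its own. Boxes follow YOLOv5's xyxy convention (`xmin`, `ymin`, `xmax`, `ymax`); `aim_point` is a hypothetical helper mirroring the `shooting_location` function shown later, which places the aim point a quarter of the box height above the box centre, i.e. roughly the head:

```python
def aim_point(box: dict) -> tuple[float, float]:
    # Horizontal centre of the box; vertically a quarter of the
    # box height above its centre.
    x = (box['xmin'] + box['xmax']) / 2
    y = (box['ymin'] + box['ymax']) / 2 - (box['ymax'] - box['ymin']) / 4
    return x, y

# A 100x200 box whose top-left corner is at (300, 100):
print(aim_point({'xmin': 300, 'ymin': 100, 'xmax': 400, 'ymax': 300}))
# -> (350.0, 150.0)
```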
三、Key code, explained
This is the auto-aiming code:
```python
import threading

import pythoncom
import PyHook3 as pyhook

from DetectEnemy import *

right_button_press = False


def AutoAim():
    global right_button_press
    while True:
        if right_button_press:
            pic = save_pictures()        # grab a fresh screenshot
            result = detect(pic)         # run YOLOv5 on it
            person = get_person(result)  # keep the largest "person" box
            if not person:
                continue
            location = shooting_location(person)
            print(location)
            mouse_move(location)         # step the crosshair toward the head
            # model_show(result, pic)    # uncomment to visualise the detection
        else:
            continue


def funcRightDown(event):
    global right_button_press
    if event.MessageName != "mouse_move":
        print("AutoAiming start")
        # Each right-button press toggles aiming on or off.
        if right_button_press:
            right_button_press = False
        else:
            right_button_press = True
    return True


def funcRightUp(event):
    # Defined for completeness; only MouseRightDown is registered below.
    global right_button_press
    if event.MessageName != "mouse_move":
        print("AutoAiming stop")
        right_button_press = False
    return True


auto_aiming_thread = threading.Thread(target=AutoAim)
MouseManger = pyhook.HookManager()

if __name__ == '__main__':
    auto_aiming_thread.start()
    MouseManger.MouseRightDown = funcRightDown
    MouseManger.HookMouse()
    pythoncom.PumpMessages()
```
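Stripped of the PyHook3 plumbing, the toggle that `funcRightDown` implements is a single boolean flip per physical press; `AimToggle` here is an illustrative stand-in, not part of the project:

```python
class AimToggle:
    def __init__(self):
        self.active = False

    def on_right_down(self):
        # Each right-button press flips the aiming state.
        self.active = not self.active
        return self.active

t = AimToggle()
print(t.on_right_down())  # -> True: first press starts aiming
print(t.on_right_down())  # -> False: second press stops it
```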
This is the code for DetectEnemy.py:
```python
import mss
import mss.tools
import numpy as np
import torch
import cv2
import win32api
import win32con
import win32gui
import win32print
from pymouse import PyMouse

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
# Load yolov5n from a local clone of the ultralytics/yolov5 repo.
model = torch.hub.load('ultralytics/yolov5', 'yolov5n', source='local').to(device)

m = mss.mss()
mt = mss.tools


def detect(img):
    # Detections come back as {index: {xmin, ymin, xmax, ymax, confidence, class, name}}.
    result = model(img).pandas().xyxy[0].to_dict('index')
    return result


def save_pictures():
    # Query the real desktop resolution (DPI-aware) and grab the full screen.
    hDC = win32gui.GetDC(0)
    w = win32print.GetDeviceCaps(hDC, win32con.DESKTOPHORZRES)
    h = win32print.GetDeviceCaps(hDC, win32con.DESKTOPVERTRES)
    img = m.grab((0, 0, w, h))
    mt.to_png(img.rgb, img.size, 6, "pic/cs.png")
    return "pic/cs.png"


def get_person(result: dict):
    # Pick the "person" detection with the largest bounding box.
    person = {}
    person_max = 0
    for key in result.keys():
        if result[key]["name"] == "person":
            xmin = int(result[key]['xmin'])
            ymin = int(result[key]['ymin'])
            xmax = int(result[key]['xmax'])
            ymax = int(result[key]['ymax'])
            area = (xmax - xmin) * (ymax - ymin)
            if area > person_max:
                person_max = area
                person = result[key]
    return person


def shooting_location(person: dict):
    # Aim at the horizontal centre, a quarter of the box height above the
    # vertical centre (roughly the head).
    locationx = (person['xmin'] + person['xmax']) / 2
    locationy = (person['ymin'] + person['ymax']) / 2 - (person['ymax'] - person['ymin']) / 4
    location = {"x": locationx, 'y': locationy}
    return location


def mouse_move(location: dict):
    # Step the cursor toward the target on each axis independently:
    # 100 px when far, 40 px at medium range, 10 px when close.
    l1 = location['x']
    l2 = location['y']
    mouse_location = PyMouse().position()
    if abs(mouse_location[0] - l1) > 150:
        x = 100
        if mouse_location[0] - l1 > 0:
            x = -100
    elif abs(mouse_location[0] - l1) > 80:
        x = 40
        if mouse_location[0] - l1 > 0:
            x = -40
    else:
        x = 10
        if mouse_location[0] - l1 > 0:
            x = -10
    if abs(mouse_location[1] - l2) > 150:
        y = 100
        if mouse_location[1] - l2 > 0:
            y = -100
    elif abs(mouse_location[1] - l2) > 80:
        y = 40
        if mouse_location[1] - l2 > 0:
            y = -40
    else:
        y = 10
        if mouse_location[1] - l2 > 0:
            y = -10
    win32api.mouse_event(win32con.MOUSEEVENTF_MOVE, x, y)


def model_show(result, pic):
    # Draw the first detection's box on the screenshot and display/save it.
    picture = cv2.imread(pic)
    xmin = int(result[0]['xmin'])
    ymin = int(result[0]['ymin'])
    xmax = int(result[0]['xmax'])
    ymax = int(result[0]['ymax'])
    cv2.rectangle(picture, (xmin, ymin), (xmax, ymax), (0, 0, 255), 2)
    cv2.imshow("cs", picture)
    cv2.imwrite('pic/pic-1-test.png', picture)
    cv2.waitKey(0)
```
model_show displays the screenshot with the detection box drawn on it; to use it, just uncomment the model_show call in AutoAim. That is about all there is to it.
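For reference, `detect()` returns `model(img).pandas().xyxy[0].to_dict('index')`, i.e. a dict keyed by detection index. Below is a mocked result in that shape, run through the same largest-box filter that `get_person` implements (`largest_person` and all the numbers are invented for illustration):

```python
mock_result = {
    0: {'xmin': 10.0, 'ymin': 20.0, 'xmax': 60.0, 'ymax': 120.0,
        'confidence': 0.81, 'class': 0, 'name': 'person'},
    1: {'xmin': 200.0, 'ymin': 30.0, 'xmax': 380.0, 'ymax': 330.0,
        'confidence': 0.77, 'class': 0, 'name': 'person'},
    2: {'xmin': 5.0, 'ymin': 5.0, 'xmax': 30.0, 'ymax': 40.0,
        'confidence': 0.52, 'class': 56, 'name': 'chair'},
}

def largest_person(result: dict) -> dict:
    # Keep only "person" detections and return the one with the largest box.
    best, best_area = {}, 0
    for det in result.values():
        if det['name'] != 'person':
            continue
        area = (det['xmax'] - det['xmin']) * (det['ymax'] - det['ymin'])
        if area > best_area:
            best, best_area = det, area
    return best

print(largest_person(mock_result)['xmin'])  # -> 200.0 (the bigger box wins)
```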
四、model_show output

五、A few closing notes
The results really aren't great, so please don't be too harsh after reading this. Still, following along is a decent way to practise with neural networks; I mainly wanted practice applying ML methods so I can keep working on my advisor's project when the semester starts.