Benchmark data for various soft-router models
1. Xiaomi R3G (MT7621A), CPU at 880 MHz, CoreMark 5399
2. Xiaoma mini soft router, N3710, CoreMark 36076
3. Intel(R) Celeron(R) CPU N2807 @ 1.58GHz: 2 Cores 2 Threads (CpuMark: 15075.011028 Scores)
4. QNAP 301w, IPQ807x/AP-HK01-C1 (CpuMark: 27916.567531 Scores)
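For reference, CoreMark numbers like these can be collected on an OpenWrt device roughly as follows (a sketch; it assumes the coremark package is available in your package feed, which the original notes do not state):
opkg update
opkg install coremark
coremark    # prints iterations/second, which is the CoreMark score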
Frequently used domain records
bilibili.com
hdslb.com
bilibili.cn
bilivideo.com
bilivideo.cn
biliapi.com
biliapi.net
apple.com
aaplimg.com
mzstatic.com
icloud.com
akamaiedge.net
digicert.com
ixigua.com
toutiao.com
douyin.com
tmall.com
taobao.com
alimama.com
alipay.com
wifizoo.net
qingshuxuetang.com
chinafix.com
oray.com
cpu-world.com
docker.io
cpu-upgrade.com
lastpass.com
intel.com
acwifi.net
synology.com
A quick note on setting up the Qinglong panel
I won't go over how to deploy it; there are plenty of articles online covering that.
This morning I noticed that memory usage on my DIY "black" TerraMaster NAS had hit 70% (it has a single 2GB stick installed), and a closer look showed the Docker containers were eating the memory.
I run a librespeed speed-test server, an auto check-in container, and a Qinglong panel; the Qinglong panel and the auto check-in alone were using close to 1.2GB of RAM.
So I opened the case and swapped in a 4GB stick. The containers had originally been configured not to start automatically, so they had to be started by hand. Then I fumbled it: I ticked the auto-restart checkbox for the app before hitting apply, waited a long time and still could not open the Qinglong panel page, and when I looked again the Qinglong container had actually been deleted!
I was instantly fuming. One careless click was about to cost me a pile of time, and the real problem was that months had passed since I last set this up from scratch, so I had forgotten the exact steps.
No way around it, so this time I'm spending a bit more effort writing it down, to save myself the searching next time.
After the Qinglong panel is running and the account and password are set, log in to the panel UI.
Then over SSH, as root, run
docker ps
to list all containers.
Pick the container you want to enter; for example, my Qinglong container is f8ed1db3309c.
Run docker exec -it f8ed1db3309c /bin/bash to enter the Qinglong container.
Then run ql check, wait a minute or two, and it will print a pile of output like the following.
/ql/shell/check.sh: line 94: /root/.bashrc: No such file or directory
## 开始执行... 2022-10-18 08:23:34
=====> 开始检测
changed 1 package in 21s
Progress: resolved 1, reused 0, downloaded 0, added 0
Progress: resolved 3, reused 0, downloaded 3, added 0
Progress: resolved 4, reused 0, downloaded 3, added 0
Progress: resolved 35, reused 0, downloaded 17, added 0
Progress: resolved 54, reused 0, downloaded 34, added 0
Progress: resolved 83, reused 2, downloaded 57, added 0
WARN deprecated uuid@3.4.0: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
Progress: resolved 127, reused 7, downloaded 95, added 0
Progress: resolved 174, reused 14, downloaded 133, added 0
Packages: +3 -2
+++--
Progress: resolved 198, reused 17, downloaded 179, added 3, done
/root/.local/share/pnpm/global/5:
- pm2 5.2.0
+ pm2 5.2.2
WARN Issues with peer dependencies found
.
└─┬ ts-node 10.9.1
└── ✕ missing peer @types/node@"*"
Peer dependencies that should be installed:
@types/node@"*"
Done in 10.9s
---> 1. 复制通知文件
---> 复制一份 /ql/sample/notify.py 为 /ql/data/scripts/notify.py
'/ql/sample/notify.py' -> '/ql/data/scripts/notify.py'
---> 复制一份 /ql/sample/notify.js 为 /ql/data/scripts/sendNotify.js
'/ql/sample/notify.js' -> '/ql/data/scripts/sendNotify.js'
---> 通知文件复制完成
---> 2. 复制nginx配置文件
'/ql/docker/nginx.conf' -> '/etc/nginx/nginx.conf'
'/ql/docker/front.conf' -> '/etc/nginx/conf.d/front.conf'
---> 配置文件复制完成
=====> 检测面板
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1, maximum-scale=1, minimum-scale=1, user-scalable=no"
/>
<link rel="shortcut icon" type="image/x-icon" href="/images/favicon.svg" />
<link rel="stylesheet" href="/umi.1a1e4a3d.css" />
<script>
window.routerBase = "/";
</script>
<script>
//! umi version: 3.5.34
</script>
</head>
<body>
<div id="root"></div>
<script src="https://gw.alipayobjects.com/os/lib/react/16.13.1/umd/react.production.min.js"></script>
<script src="https://gw.alipayobjects.com/os/lib/react-dom/16.13.1/umd/react-dom.production.min.js"></script>
<script src="/umi.171fdc21.js"></script>
</body>
</html>
=====> 面板服务启动正常
=====> 检测nginx服务
94 root 0:00 nginx: master process nginx -c /etc/nginx/nginx.conf
95 root 0:00 nginx: worker process
96 root 0:00 nginx: worker process
=====> nginx服务正常
---> pm2日志
2022-10-18T07:26:42: info: ✌️ DB loaded
2022-10-18T07:26:42: info: ✌️ Init file down
2022-10-18T07:26:43: info: ✌️ Sentry loaded
2022-10-18T07:26:45: info: ✌️ Dependency Injector loaded
2022-10-18T07:26:45: info: ✌️ Express loaded
2022-10-18T07:26:45: info: ✌️ init data loaded
2022-10-18T07:26:45: info: ✌️ link deps loaded
2022-10-18T07:26:45: info: ✌️ init task loaded
2022-10-18T07:26:46: info: ✌️ Sock loaded
2022-10-18T07:26:46: info: [创建interval任务],任务ID: token,任务名: 生成token,执行命令: node /ql/static/build/token.js
2022-10-18T07:26:46: debug: ✌️ Back server launched on port 5600
2022-10-18T07:26:47: info: 任务 node /ql/static/build/token.js 进程id: 228 退出,退出码 0
2022-10-18T08:16:49: info: [取消定时任务],任务名:KR
2022-10-18T08:16:49: info: [创建cron任务],任务ID: 1,cron: 0 1 * * *,任务名: KR,执行命令: ql repo "https://github.com/KingRan/KR.git" "" "" "" "" ""
2022-10-18T08:17:12: error: 执行任务 ql repo "https://github.com/KingRan/KR.git" "" "" "" "" "" 失败,时间:2022-10-18 8:17:12, 错误信息:"Cloning into '/ql/data/repo/KingRan_KR'...\n"
2022-10-18T08:18:59: info: 任务 task KingRan_KR/h5st.ts 进程id: 10679 退出,退出码 0
2022-10-18T08:19:17: info: 任务 task KingRan_KR/jd_bean_change_pro.js 进程id: 12067 退出,退出码 0
2022-10-18T08:23:15: info: 任务 ql repo "https://github.com/KingRan/KR.git" "" "" "" "" "" 进程id: 239 退出,退出码 0
=====> 检测后台
{"code":401,"message":"jwt malformed"}
=====> 后台服务启动正常
启动面板服务
启动定时任务服务
启动公开服务
=====> 检测结束
## 执行结束... 2022-10-18 08:24:20 耗时 46 秒
After this runs, the scripts' dependency problems should be taken care of, so you don't have to set them up one by one by hand.
Next, add an environment variable named JD_COOKIE in the Qinglong panel and paste in your cookie. How to obtain the JD cookie I won't repeat here; plenty of write-ups cover it.
Then go to the panel's subscription management (订阅管理), add the KR repository as a subscription (the equivalent command form is sketched below) and run it once; the individual scripts will then be added to the scheduled tasks.
One more thing: before running the subscription, edit the following parameter in the panel's configuration file and add ts to it.
## ql repo命令拉取脚本时需要拉取的文件后缀,直接写文件后缀名即可
RepoFileExtensions="js py ts"
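For reference, the KR subscription ends up being executed as the following command (this is exactly what the pm2 log above shows), so it should also be possible to trigger it manually inside the container:
ql repo "https://github.com/KingRan/KR.git" "" "" "" "" ""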
Done.
20221021 update
It turns out the dependencies still need to be installed manually in the Qinglong panel (a command-line sketch follows the lists below).
NodeJS dependencies
crypto-js
prettytable
dotenv
jsdom
date-fns
tough-cookie
tslib
ws@7.4.3
ts-md5
jsdom -g
jieba
fs
form-data
json5
global-agent
png-js
@types/node
require
typescript
js-base64
axios
moment
got
md5
request
download
tunnel
ws
qrcode-terminal
Python3 dependencies
requests
canvas
ping3
jieba
aiohttp
The Linux dependencies below fail to install; I'm not sure why.
bizCode
bizMsg
lxml
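If you would rather install the NodeJS and Python dependencies from a shell than click through the panel's dependency page, something like the following should work inside the container (a sketch; it assumes pnpm and pip3 are present in the Qinglong image, and only part of the package list is shown):
docker exec -it f8ed1db3309c /bin/bash   # container ID from docker ps
# NodeJS dependencies, installed globally the way the panel does it
pnpm add -g crypto-js prettytable dotenv jsdom date-fns tough-cookie tslib ws@7.4.3 ts-md5
# Python3 dependencies
pip3 install requests ping3 jieba aiohttp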
Backing up the Qinglong panel configuration
openwrt:
root@OpenWrt:~# find / -name env.sh
/opt/overlay2/d9479246f2de51b54782fd2275cb4d8dedc571cfe4fe714d91a16fb5c98a9a0e/merged/ql/data/config/env.sh
/opt/overlay2/d9479246f2de51b54782fd2275cb4d8dedc571cfe4fe714d91a16fb5c98a9a0e/diff/ql/data/config/env.sh
/opt/overlay2/4edf900d43ecb765c9e85a4c0c849697cb5053e3ff40708754d5d372e7cbcb83/diff/ql/data/config/env.sh
TerraMaster:
]# find / -name env.sh
/mnt/md0/appdata/docker/overlay2/63c768eae6e9faf7564128ea62dfdda9c0dd3c4e2fe89dd7b932812716e8efd7/merged/ql/data/config/env.sh
/mnt/md0/appdata/docker/overlay2/63c768eae6e9faf7564128ea62dfdda9c0dd3c4e2fe89dd7b932812716e8efd7/diff/ql/data/config/env.sh
Copy it into the corresponding folder under the new panel and overwrite.
Tested: this does not work. The variables do not show up, and restarting Qinglong clears them.
Fix for running out of disk space when building an ext4 image with OpenWrt: error: ext4_allocate_best_fit_partial: failed to allocate 2496 blocks, out of space?
Fix:
make menuconfig
Go to "Target Images" > "Root filesystem partition size (in MB)"
and change it to a larger value; the default is 160, I simply set it to 512.
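The same change can also be made directly in .config rather than through menuconfig; a sketch (the symbol below is the stock OpenWrt option behind that menu entry):
# in the buildroot .config
CONFIG_TARGET_ROOTFS_PARTSIZE=512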
Testing i5-8250U performance with UnixBench under Hyper-V
BYTE UNIX Benchmarks (Version 5.1.3)
System: gcc-x64: GNU/Linux
OS: GNU/Linux -- 5.4.0-122-generic -- #138-Ubuntu SMP Wed Jun 22 15:00:31 UTC 2022
Machine: x86_64 (x86_64)
Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (3600.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
CPU 1: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (3600.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
CPU 2: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (3600.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
CPU 3: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (3600.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
CPU 4: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (3600.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
CPU 5: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (3600.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
CPU 6: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (3600.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
CPU 7: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (3600.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
06:46:06 up 7:49, 2 users, load average: 0.00, 0.00, 0.00; runlevel 5
------------------------------------------------------------------------
Benchmark Run: Sat Jul 23 2022 06:46:06 – 07:14:12
8 CPUs in system; running 1 parallel copy of tests
Dhrystone 2 using register variables 42608524.8 lps (10.0 s, 7 samples)
Double-Precision Whetstone 6489.4 MWIPS (9.9 s, 7 samples)
Execl Throughput 4129.9 lps (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks 635787.5 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 169590.5 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 1985937.5 KBps (30.0 s, 2 samples)
Pipe Throughput 865156.9 lps (10.0 s, 7 samples)
Pipe-based Context Switching 20744.7 lps (10.0 s, 7 samples)
Process Creation 7347.1 lps (30.0 s, 2 samples)
Shell Scripts (1 concurrent) 10401.9 lpm (60.0 s, 2 samples)
Shell Scripts (8 concurrent) 2980.9 lpm (60.0 s, 2 samples)
System Call Overhead 537140.1 lps (10.0 s, 7 samples)
System Benchmarks Index Values BASELINE RESULT INDEX
Dhrystone 2 using register variables 116700.0 42608524.8 3651.1
Double-Precision Whetstone 55.0 6489.4 1179.9
Execl Throughput 43.0 4129.9 960.5
File Copy 1024 bufsize 2000 maxblocks 3960.0 635787.5 1605.5
File Copy 256 bufsize 500 maxblocks 1655.0 169590.5 1024.7
File Copy 4096 bufsize 8000 maxblocks 5800.0 1985937.5 3424.0
Pipe Throughput 12440.0 865156.9 695.5
Pipe-based Context Switching 4000.0 20744.7 51.9
Process Creation 126.0 7347.1 583.1
Shell Scripts (1 concurrent) 42.4 10401.9 2453.3
Shell Scripts (8 concurrent) 6.0 2980.9 4968.2
System Call Overhead 15000.0 537140.1 358.1
========
System Benchmarks Index Score 1065.4
------------------------------------------------------------------------
Benchmark Run: Sat Jul 23 2022 07:14:12 – 07:42:44
8 CPUs in system; running 8 parallel copies of tests
Dhrystone 2 using register variables 153654544.7 lps (10.0 s, 7 samples)
Double-Precision Whetstone 34421.6 MWIPS (11.5 s, 7 samples)
Execl Throughput 13329.0 lps (29.9 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks 712096.5 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 180312.6 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 2250174.5 KBps (30.0 s, 2 samples)
Pipe Throughput 3111422.7 lps (10.0 s, 7 samples)
Pipe-based Context Switching 601761.1 lps (10.0 s, 7 samples)
Process Creation 22973.2 lps (30.0 s, 2 samples)
Shell Scripts (1 concurrent) 23595.3 lpm (60.0 s, 2 samples)
Shell Scripts (8 concurrent) 3313.3 lpm (60.0 s, 2 samples)
System Call Overhead 1990193.6 lps (10.0 s, 7 samples)
System Benchmarks Index Values BASELINE RESULT INDEX
Dhrystone 2 using register variables 116700.0 153654544.7 13166.6
Double-Precision Whetstone 55.0 34421.6 6258.5
Execl Throughput 43.0 13329.0 3099.8
File Copy 1024 bufsize 2000 maxblocks 3960.0 712096.5 1798.2
File Copy 256 bufsize 500 maxblocks 1655.0 180312.6 1089.5
File Copy 4096 bufsize 8000 maxblocks 5800.0 2250174.5 3879.6
Pipe Throughput 12440.0 3111422.7 2501.1
Pipe-based Context Switching 4000.0 601761.1 1504.4
Process Creation 126.0 22973.2 1823.3
Shell Scripts (1 concurrent) 42.4 23595.3 5564.9
Shell Scripts (8 concurrent) 6.0 3313.3 5522.1
System Call Overhead 15000.0 1990193.6 1326.8
========
System Benchmarks Index Score 3005.4
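For reference, runs like the ones above and below can be reproduced with the standard byte-unixbench suite; a sketch (the repository and steps are my assumption, not from the original notes):
git clone https://github.com/kdlucas/byte-unixbench.git
cd byte-unixbench/UnixBench
make        # needs gcc, make and perl
./Run       # runs the single-copy pass, then one copy per CPU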
During the test, because of the laptop's cooling limits, the CPU stayed at roughly 2.1 GHz most of the time.
For comparison, below is a UnixBench score for a J1900 CPU, with OpenWrt running on the host and Ubuntu running in Docker.
BYTE UNIX Benchmarks (Version 5.1.3)
System: OpenWrt: GNU/Linux
OS: GNU/Linux -- 5.15.35 -- #0 SMP Mon Apr 25 11:18:22 2022
Machine: x86_64 (x86_64)
Language: en_US.utf8 (charmap="ANSI_X3.4-1968", collate="ANSI_X3.4-1968")
CPU 0: Intel(R) Celeron(R) CPU J1900 @ 1.99GHz (4000.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
CPU 1: Intel(R) Celeron(R) CPU J1900 @ 1.99GHz (4000.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
CPU 2: Intel(R) Celeron(R) CPU J1900 @ 1.99GHz (4000.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
CPU 3: Intel(R) Celeron(R) CPU J1900 @ 1.99GHz (4000.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
15:47:42 up 7 days, 12:59, 0 users, load average: 0.74, 0.33, 0.14; runlevel
------------------------------------------------------------------------
Benchmark Run: Fri Jul 22 2022 15:47:42 – 16:15:46
4 CPUs in system; running 1 parallel copy of tests
Dhrystone 2 using register variables 11870208.3 lps (10.0 s, 7 samples)
Double-Precision Whetstone 2427.7 MWIPS (10.2 s, 7 samples)
Execl Throughput 1596.9 lps (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks 242133.1 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 66194.7 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 637394.7 KBps (30.0 s, 2 samples)
Pipe Throughput 466222.1 lps (10.0 s, 7 samples)
Pipe-based Context Switching 41094.0 lps (10.0 s, 7 samples)
Process Creation 2470.1 lps (30.0 s, 2 samples)
Shell Scripts (1 concurrent) 3385.2 lpm (60.0 s, 2 samples)
Shell Scripts (8 concurrent) 1181.3 lpm (60.0 s, 2 samples)
System Call Overhead 376589.3 lps (10.0 s, 7 samples)
System Benchmarks Index Values BASELINE RESULT INDEX
Dhrystone 2 using register variables 116700.0 11870208.3 1017.2
Double-Precision Whetstone 55.0 2427.7 441.4
Execl Throughput 43.0 1596.9 371.4
File Copy 1024 bufsize 2000 maxblocks 3960.0 242133.1 611.4
File Copy 256 bufsize 500 maxblocks 1655.0 66194.7 400.0
File Copy 4096 bufsize 8000 maxblocks 5800.0 637394.7 1099.0
Pipe Throughput 12440.0 466222.1 374.8
Pipe-based Context Switching 4000.0 41094.0 102.7
Process Creation 126.0 2470.1 196.0
Shell Scripts (1 concurrent) 42.4 3385.2 798.4
Shell Scripts (8 concurrent) 6.0 1181.3 1968.8
System Call Overhead 15000.0 376589.3 251.1
========
System Benchmarks Index Score 475.5
------------------------------------------------------------------------
Benchmark Run: Fri Jul 22 2022 16:15:46 – 16:43:51
4 CPUs in system; running 4 parallel copies of tests
Dhrystone 2 using register variables 46351886.4 lps (10.0 s, 7 samples)
Double-Precision Whetstone 9709.4 MWIPS (10.2 s, 7 samples)
Execl Throughput 4547.2 lps (29.8 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks 422497.9 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 123753.2 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 1023713.4 KBps (30.0 s, 2 samples)
Pipe Throughput 1829630.5 lps (10.0 s, 7 samples)
Pipe-based Context Switching 269682.6 lps (10.0 s, 7 samples)
Process Creation 9473.4 lps (30.0 s, 2 samples)
Shell Scripts (1 concurrent) 10449.0 lpm (60.0 s, 2 samples)
Shell Scripts (8 concurrent) 1413.0 lpm (60.1 s, 2 samples)
System Call Overhead 1377797.4 lps (10.0 s, 7 samples)
System Benchmarks Index Values BASELINE RESULT INDEX
Dhrystone 2 using register variables 116700.0 46351886.4 3971.9
Double-Precision Whetstone 55.0 9709.4 1765.3
Execl Throughput 43.0 4547.2 1057.5
File Copy 1024 bufsize 2000 maxblocks 3960.0 422497.9 1066.9
File Copy 256 bufsize 500 maxblocks 1655.0 123753.2 747.8
File Copy 4096 bufsize 8000 maxblocks 5800.0 1023713.4 1765.0
Pipe Throughput 12440.0 1829630.5 1470.8
Pipe-based Context Switching 4000.0 269682.6 674.2
Process Creation 126.0 9473.4 751.9
Shell Scripts (1 concurrent) 42.4 10449.0 2464.4
Shell Scripts (8 concurrent) 6.0 1413.0 2355.0
System Call Overhead 15000.0 1377797.4 918.5
========
System Benchmarks Index Score 1364.1
As you can see, the J1900 is still a fairly capable CPU. On paper the i5-8250U should deliver roughly 5x the J1900's performance, but testing inside a Hyper-V VM costs it quite a bit.
For performance comparisons across different CPUs, see this article.
http://www.wifizoo.net/archives/2017
Fix for the Qinglong panel having no internet access when running on OpenWrt
On OpenWrt, a container created in bridge mode cannot reach the public network by default; it is only reachable from the LAN.
The fix is to create the container on the host network instead of bridge.
In host mode the container is directly on the network, so no port mapping and no debugging are needed.
Run the following directly over SSH:
docker run -dit \
-v $PWD/ql/config:/ql/config \
-v $PWD/ql/log:/ql/log \
-v $PWD/ql/db:/ql/db \
-v $PWD/ql/scripts:/ql/scripts \
-v $PWD/ql/jbot:/ql/jbot \
-e ENABLE_HANGUP=true \
-e ENABLE_WEB_PANEL=true \
--name qinglong \
--hostname qinglong \
--restart always \
--net host \
whyour/qinglong:latest
To use bridge mode instead:
Run the following directly over SSH:
docker run -dit \
-v $PWD/ql/config:/ql/config \
-v $PWD/ql/log:/ql/log \
-v $PWD/ql/db:/ql/db \
-v $PWD/ql/scripts:/ql/scripts \
-v $PWD/ql/jbot:/ql/jbot \
-p 5701:5700 \
-e ENABLE_HANGUP=true \
-e ENABLE_WEB_PANEL=true \
--name qinglong1 \
--hostname qinglong1 \
--restart always \
whyour/qinglong:latest
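To confirm afterwards that the container really has internet access, a quick check along these lines should do (a sketch; it assumes ping is available in the image, which it normally is in Alpine-based images, and uses www.baidu.com only as an example target):
docker exec -it qinglong ping -c 3 www.baidu.com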
Fix for Docker containers in the default bridge mode on OpenWrt having no access to external networks
http://www.wifizoo.net/archives/2291
Jumper-wire method for modding the Tenda AC15 for 5GHz
The 5GHz hardware mod is actually fairly simple; it boils down to removing three resistors and soldering three jumper wires.
1. Remove R380 (that is, just leave R380 open-circuit, or simply take the resistor off), then run a wire from the left end of R381 to the lower end of R380;
2. Remove R384 (same idea: leave it open-circuit or take the resistor off), then run a wire from the lower end of R385 to the lower end of R384 (or the upper end of U33);
3. Remove R387 (same idea: leave it open-circuit or take the resistor off), then run a wire from the right end of R388 to the upper end of R387 (or the upper-left end of U34).
NAT loopback not working on Lean's OpenWrt builds
Edit /etc/sysctl.conf and /etc/sysctl.d/11-br-netfilter.conf and set
net.bridge.bridge-nf-call-arptables=0
net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-iptables=0
Then run sysctl -p (root@OpenWrt:~# sysctl -p) to check that the changes took effect:
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
Finally run /etc/init.d/sysctl restart and you're done.
If this conflicts with Docker networking, see the fix referenced here.
NAT loopback failing means, simply put, that a device inside the LAN (behind this router) cannot reach another LAN device's forwarded port through the router's public IP plus port (for example via DDNS), such as an IP camera.
20220723 update
In practice, with
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
set as above, simply enabling LuCI > Network > Firewall > Forward: accept is enough for bridge-mode containers to get internet access (a uci equivalent is sketched below).
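The same LuCI setting can also be applied from the shell; a sketch using the stock OpenWrt firewall defaults section:
uci set firewall.@defaults[0].forward='ACCEPT'
uci commit firewall
/etc/init.d/firewall restart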