duangsuse::Echo
import this:
Beautiful, not ugly; explicit, not obscure; short but not trivial; long but not tangled; flat, not wide; read first, code after; carry it everywhere, and don't pin your hopes on a heaven-on-earth.
Never swallow exceptions, never let errors just slip past; lament them as one more truth. Efficiency matters a great deal, and blindness is the least efficient thing of all.
Conciseness is a reliable prior, not a sacrifice offered to reliability.
Know its changes and hold to its constants, and you set a pattern for the world; exhaust its changes and find them inexhaustible, and you gain the advantage on the ground. Knowing change, holding the constant, yet chasing change to learn the new: I acknowledge truth, but I don't take it too seriously.

Tech-related content, do subscribe~
There is also the throws misc channel @dsuset
and a repost channel @dsusep.
There is a very small chance of posts criticizing the government; feel free to leave if that makes you uncomfortable.
suse's little site (luck-oriented programming): https://WOJS.org/#/
https://github.com/kkroening/ffmpeg-python/issues/246
import ffmpeg
import numpy as np

# images: (n, height, width, channels) uint8 RGB array; fn/vcodec/framerate as configured
n, height, width, channels = images.shape
process = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height))
    .output(fn, pix_fmt='yuv420p', vcodec=vcodec, r=framerate)
    .overwrite_output()
    .run_async(pipe_stdin=True)
)
for frame in images:
    process.stdin.write(
        frame
        .astype(np.uint8)
        .tobytes()
    )
process.stdin.close()
process.wait()
duangsuse::Echo
https://github.com/krohak/Embedded-Subtitles-OCR/blob/master/Embedded-Subtitles-OCR.ipynb Learned some enhanced filtering techniques from this 🤔 Binarize (high-pass filter): img = np.array(img); img = img[:,:,0]; img = img > 170. Bandpass filter: lower = np.array([140, 140, 140]) upper = np.array([199…
./extract_subtitles.py -crop '(313,951)(1343,45)' --crop-debug -filter-code '~cv2.inRange(it, np.array([0xbf,0xbf,0xbf]), np.array([0xff, 0xff, 0xff]))' SomethingNew.mp4 🤔 Works very well on white text over a black background; the extracted text needs no further editing.
So preprocessing is still necessary after all.
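As a rough sketch of what that -filter-code expression does to a cropped frame (illustrative only: the (x,y)(w,h) reading of the crop, the frame loading, and the pytesseract call are assumptions, not extract_subtitles.py's actual internals):

import cv2
import numpy as np
import pytesseract

frame = cv2.imread('frame.png')      # one extracted video frame (BGR)
x, y, w, h = 313, 951, 1343, 45      # subtitle region, assuming -crop means (x,y)(w,h)
it = frame[y:y+h, x:x+w]

# inRange marks pixels whose channels all fall in [0xBF, 0xFF] (the white glyphs) as 255;
# '~' flips the mask, yielding black text on a white background, which OCR handles much better.
mask = ~cv2.inRange(it, np.array([0xBF, 0xBF, 0xBF]), np.array([0xFF, 0xFF, 0xFF]))
print(pytesseract.image_to_string(mask))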
106-159 "How can something be so nice?"
178-250 "Yet so shocking, yet so nice."
286-321 "How can something be so new?"
367-387 "Yet so known but yet so new?"
414-429 "Ohh, one two, three, the time has gone."
467-514 "Problematic things undone."
537-595 "How can something be so cruel? "
610-683 "Yet so warm but yet so cruel?"
755-945 ""
967-989 "Try to find from a different sight,"
1039-1095 "Get to choose where to look back,"
1120-1235 "Resolve what others can't do for now..."
1280-1366 "What a beautiful way to fix all wrong things."
1383-1439 "Evolve future generations of work teams:"
1509-1612 "But tell me why's that fear of yours,"
1634-1714 "‘Cause you know it is going to work... (keep ittup) ="
1739-1817 "Let's start from now, stay tuned, you're half way through!"
1834-1903 "Break it, crash it, take it farther"
1978-1978 "Making something even smarter."
2047-2047 "Do it, start it, make it happen,\n\nbe"
2065-2079 "Stab in statements screaming louder. "
2119-2160 "How can something be so nice?"
2188-2239 "Yet so shocking, yet so nice."
2261-2283 "How can something be so new? "
2334-2399 "Yet so known but yet..."
2430-2654 "IT'S SOMETHING NEW"
The whole passage needed only a single character-replacement edit before it was directly usable. 🤔
test.srt
1.7 KB
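For context, turning ranges like those into SubRip timestamps takes only a few lines (a hypothetical sketch: it assumes the numbers are frame indices and uses an illustrative 24 fps; this is not extract_subtitles.py's actual output code):

def srt_timestamp(frame_no, fps=24.0):
    # frame index -> "HH:MM:SS,mmm" (fps is an assumed, illustrative value)
    ms = int(frame_no / fps * 1000)
    h, ms = divmod(ms, 3600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def write_srt(entries, path, fps=24.0):
    # entries: [(start_frame, end_frame, text), ...]
    with open(path, 'w', encoding='utf-8') as f:
        for i, (a, b, text) in enumerate(entries, 1):
            f.write(f"{i}\n{srt_timestamp(a, fps)} --> {srt_timestamp(b, fps)}\n{text}\n\n")

write_srt([(106, 159, "How can something be so nice?")], 'test.srt')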
The first half came out okay, though honestly it shouldn't just be "okay", it should be "no problems at all".
Here are two images, both mosaics of the soft inter-frame difference matrices; the lower one had its color range pre-filtered. Notice that... they are nearly identical.
The bandpass filter really does work remarkably well.
./extract_subtitles.py -crop '(617,955)(685,32)' --crop-debug -filter-code '~cv2.inRange(it, (0xBF,0xBF,0xBF), (0xFF,0xFF,0xFF))' SomethingNew.mp4
duangsuse::Echo
#PL #Web #dev https://www.mint-lang.com/
About this #PL Mint, let me repost a comment from another channel.
This is the new algorithm with blur+grayscale; overall it's passable.
The new algorithm uses peakutils.baseline and peakutils.indexes; overall it performs poorly, constantly missing keyframes (missing them badly, never mind precisely pinning down when a lyric appears or disappears), and it is worse than the original smooth + argrelextrema algorithm.

The new algorithm determines the diff via grayscale + subtract + countNonZero, and does the statistics via smooth + argrelextrema + subtract(it, 1).
The old algorithm summed absdiff over all channels for the diff, and used peakutils.baseline and peakutils.indexes for the statistics.

in solveFrameDifferences:
postprocess = lambda mat: grayscale(self.postprocessUMat(self.cropUMat(mat, crop)))


diff = cv2.subtract(curr_frame, prev_frame)
#vs. diff = cv2.absdiff(curr_frame, prev_frame)

yield Frame(index, curr_frame, cv2.countNonZero(diff) )


in solveValidFrames:
diffs = array(frame_diffs)
base = peak.baseline(diffs, 2)
indices = peak.indexes(diffs-base, 0.9, min_dist=1)
return (base, map(frames.__getitem__, indices))
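To make the comparison concrete, here is a self-contained sketch of the two diff metrics and the two peak-picking strategies side by side (assumptions: OpenCV/NumPy/SciPy/peakutils are available, and the VideoCapture loop plus the moving-average smoothing window are illustrative; only the 2 / 0.9 / min_dist=1 parameters come from the snippet above):

import cv2
import numpy as np
import peakutils as peak
from scipy.signal import argrelextrema

def frame_diffs(path):
    # Walk the video once, computing both per-frame difference metrics.
    cap = cv2.VideoCapture(path)
    prev_bgr, prev_gray = None, None
    diffs_new, diffs_old = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_bgr is not None:
            # "new" metric: grayscale + subtract + countNonZero (saturating subtract drops negative changes)
            diffs_new.append(cv2.countNonZero(cv2.subtract(gray, prev_gray)))
            # "old" metric: absdiff summed over all channels
            diffs_old.append(int(cv2.absdiff(frame, prev_bgr).sum()))
        prev_bgr, prev_gray = frame, gray
    cap.release()
    return np.array(diffs_new, dtype=float), np.array(diffs_old, dtype=float)

def peaks_via_peakutils(diffs):
    # baseline removal + thresholded peak picking, as in solveValidFrames above
    base = peak.baseline(diffs, 2)
    return peak.indexes(diffs - base, 0.9, min_dist=1)

def peaks_via_argrelextrema(diffs, window=13):
    # simple moving-average smoothing, then local maxima
    smoothed = np.convolve(diffs, np.ones(window) / window, mode='same')
    return argrelextrema(smoothed, np.greater)[0]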
#tech #dev #statement 🤔 As for whether this ParserKt has any innovation in it...
Taken as a whole, ParserKt is indeed innovative (though examined technique by technique there is nothing really new in it); then again, every library of this kind ships its own pet ideas.
For instance, ParserKt's read/show is really just unparsing, and ParserKt's one-pass property holds for every pure parser combinator anyway (and admittedly, for things like string composition such parser combinators read even better).
As a parser combinator, ParserKt has no algorithmic highlights of its own either (most parser combinators' selling point is convenience rather than algorithms; only the few that exploit codegen/macro/inline lean toward the algorithmic side).
Even the current mutable Pattern and the LayoutPattern/InfixPattern are just combinations that aren't used all that widely; essentially nothing major has changed.
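To make the read/show point concrete, here is a toy illustration of a pattern that both parses and unparses (a Python sketch of the general idea only; the class names and structure are made up and are not ParserKt's actual API):

class Digits:
    # read: consume a run of digits; show: print the value back out
    def read(self, s, i=0):
        j = i
        while j < len(s) and s[j].isdigit():
            j += 1
        return int(s[i:j]), j
    def show(self, v):
        return str(v)

class Char:
    def __init__(self, c): self.c = c
    def read(self, s, i=0):
        assert s[i] == self.c
        return self.c, i + 1
    def show(self, v):
        return v

class Seq:
    # sequence patterns; read yields a tuple, show rebuilds the original text from it
    def __init__(self, *items): self.items = items
    def read(self, s, i=0):
        vals = []
        for p in self.items:
            v, i = p.read(s, i)
            vals.append(v)
        return tuple(vals), i
    def show(self, vals):
        return ''.join(p.show(v) for p, v in zip(self.items, vals))

pair = Seq(Digits(), Char(','), Digits())
value, _ = pair.read("12,34")        # -> (12, ',', 34)
assert pair.show(value) == "12,34"   # the same pattern "unparses" its result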

Come to think of it, Kotlin itself is the same: multiplatform has long since stopped being a new concept (Haskell and Scala both have official JS backends, and even C has WebAssembly).
You can only say Kotlin packed in a lot of its own pet ideas; it's nothing new, it's just that none of the older stuff is as pleasant to use...
#PL https://github.com/samshadwell/TrumpScript#mission
The is and are keywords are used both to check for equality, and for assignment. To use for assignment, say something like Trump is great or Democrats are dumb. To use to check for equality, do the same but append a ?. For example, you may need to ask yourself Trump is "the best"? (although we all know that would evaluate to fact anyway)
Every time I see this plot (the raw per-pixel absdiff summed over channels, with no statistics applied) I get annoyed: something that looks so simple apparently still can't be done precisely... 🤔
Forwarded from dnaugsuz
Does anyone in this group know how to do keyframe extraction with OpenCV 🌚