duangsuse::Echo
But for this plan, we must be able to extract the language a tag uses from its class, i.e. literateCodeFilter: has.cssClass("language-kotlin"). That change is a breaking one, so another major version will have to be released.
Actually, there is also a problem worth mentioning here that does not need to be solved. #JavaScript
Sigh, hard to believe I am still bothered by this kind of problem, even though a chain fold is not necessarily only for "writing programs"; it can also be used for logical derivation...
foldr f v [x] = f x v
foldr f v (x:xs) = f x (foldr f v xs)
foldr1 f [x] = x
foldr1 f (x:xs) = f x (foldr1 f xs)
Then we can implement it:
function joinBy<T>(join: (p:Predicate<T>, q:Predicate<T>) => Predicate<T>, ...ps: Predicate<T>[]): Predicate<T> {
  return ps.reduce((a,b) => join(a,b));
}
For example, with many has.CSSClass predicates and the like: joinBy(or, has.CSSClass("language-java"), has.CSSClass("language-kotlin"))
or joinBy(or, ...["language-java", "language-kotlin"].map(has.CSSClass)). Of course, a less common name like foldr could also be used:
function foldr(f, v, ...xs) {
  if (xs.length == 1) return f(xs[0], v);
  else return f(xs[0], foldr(f, v, ...xs.slice(1)));
}
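Putting the pieces together, here is a minimal self-contained sketch in TypeScript; Predicate, or, and has.cssClass are assumed shapes for illustration, not LiterateKt's actual API:
// Sketch only: these helpers are assumptions for illustration, not the project's real API.
type Predicate<T> = (x: T) => boolean;
const or = <T>(p: Predicate<T>, q: Predicate<T>): Predicate<T> => x => p(x) || q(x);
// Hypothetical helper: true when an element carries the given CSS class.
const has = {
  cssClass: (name: string): Predicate<Element> => el => el.classList.contains(name),
};
// joinBy as defined above:
function joinBy<T>(join: (p: Predicate<T>, q: Predicate<T>) => Predicate<T>, ...ps: Predicate<T>[]): Predicate<T> {
  return ps.reduce((a, b) => join(a, b));
}
// Accept a code block written in either language:
const literateCodeFilter = joinBy(or, ...["language-java", "language-kotlin"].map(has.cssClass));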
So before we can build the Element Tree with the cases told apart, we still need a
Map<string, [PlaygroundDefaults, PlaygroundFnGlobalId, PostprocessFn]>
where PostprocessFn helps playgrounds whose element-tree requirements differ a lot from #Kotlin Playground's achieve compatibility, for example by modifying the class of the hidden text area. #project
type PlaygroundDefaults = Object
type PlaygroundFnGlobalId = string
type PostprocessFn = Consumer<Element>;
type Consumer<T> = (item:T) => any;
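A rough sketch of what one entry in such a registry might look like; the defaults, global id, and postprocess step below are hypothetical placeholders, not a real integration:
// PlaygroundDefaults, PlaygroundFnGlobalId, PostprocessFn and Consumer as defined above.
const playgrounds = new Map<string, [PlaygroundDefaults, PlaygroundFnGlobalId, PostprocessFn]>();
playgrounds.set("language-kotlin", [
  { "data-highlight-only": false },        // defaults copied onto the created element (assumed shape)
  "KotlinPlayground",                      // name of the global playground function to look up
  el => el.classList.add("kotlin-code"),   // postprocess: tweak the element tree for compatibility
]);
// Later, when a code tag with class "language-kotlin" is met:
const entry = playgrounds.get("language-kotlin");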
Our helper-module dependencies can be resolved with require(), i.e. RequireJS.
#TypeScript TS also has a sort of "semi-dependent type" feature: the output type can be determined by the actual argument.
interface Monkey {
Hou: HappyMonkey;
Jun: SadMonkey;
}
function getMonkey<K extends keyof Monkey>(name: K): Monkey[K] {/**/}
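A small usage sketch of this keyof-based indexing; HappyMonkey, SadMonkey and the returned values are made up here only so that it compiles:
interface HappyMonkey { mood: "happy" }
interface SadMonkey { mood: "sad" }
interface Monkey {
  Hou: HappyMonkey;
  Jun: SadMonkey;
}
// The return type is indexed by the literal type of the argument.
function getMonkey<K extends keyof Monkey>(name: K): Monkey[K] {
  const monkeys: Monkey = { Hou: { mood: "happy" }, Jun: { mood: "sad" } };
  return monkeys[name];
}
const happy = getMonkey("Hou"); // inferred as HappyMonkey
const sad = getMonkey("Jun");   // inferred as SadMonkey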
duangsuse::Echo
The way example programs are written is fairly mature by now: write the basic building blocks of one layer of a module directly in a single literate, then write, inside it, the examples that depend on them and the algorithms supporting those examples.
Just mentioning this in passing while updating my own Fedora OS installation.
The previous model was one Literate to N language-kotlin code blocks; when showing the code, they were concatenated to create a single <code> tag that was handed to KotlinPlayground for the follow-up work.
Now one Literate can map to N code blocks through a "configurable" Map<LanguageId, [ElementConfig, Consumer<Element>]> (in the previous version, that ElementConfig was a DOM ElementAttribute Map and the Consumer<Element> was a PlaygroundGlobalId).
So when showing code we can no longer simply concatenate everything. The filterCode workflow has to be dropped; instead the enableCodeFilter program uses filterCodeTag to filter out the inner language-* Elements, concatenates and processes them per language, and adds the button. It is actually not too hard: it needs the hist(ogram) helper I defined earlier to organize per-language <code> creation, after which the previous <code> creation workflow can be reused. The process is a per-language for-each whose target is appendChildElement; see the sketch after the type definitions below.
Personally, I think each literate should have a single language chosen for it, which also fits the original design model of today's LiterateKt better.
type ElementConfig = Consumer<Element> // defined in dom.ts
type Consumer<T> = (item:T) => any // defined in util.ts
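Here is a minimal sketch of that per-language for-each; hist, the selector, and the surrounding DOM handling are assumed stand-ins rather than LiterateKt's actual code:
// Sketch: group a literate's inner code elements by their language-* class,
// then create one merged <code> per language and append it (helper names are assumptions).
// Consumer<T> as defined above in util.ts.
function showCode(literate: Element, playgrounds: Map<string, Consumer<Element>>) {
  // hist(ogram): language id -> code elements carrying that language class
  const hist = new Map<string, Element[]>();
  for (const code of Array.from(literate.querySelectorAll("[class*='language-']"))) {
    const lang = Array.from(code.classList).find(c => c.startsWith("language-"));
    if (lang == null) continue;
    if (!hist.has(lang)) hist.set(lang, []);
    hist.get(lang)!.push(code);
  }
  // per-language for-each: reuse the old <code> creation workflow, target appendChild
  for (const [lang, codes] of hist) {
    const merged = document.createElement("code");
    merged.className = lang;
    merged.textContent = codes.map(c => c.textContent).join("\n");
    literate.appendChild(merged);
    playgrounds.get(lang)?.(merged); // hand the merged element to its playground, if configured
  }
}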
#recommended
bpython
dreampie
Both are wrappers around the Python interpreter.
bpython version 0.18 on top of Python 2.7.17 /usr/bin/python2
[DuangSUSE@duangsuse]~% sparkleshare
exception inside UnhandledException handler: The type initializer for 'Sparkles.Logger' threw an exception.
[ERROR] FATAL UNHANDLED EXCEPTION: System.TypeInitializationException: The type initializer for 'Sparkles.Logger' threw an exception. ---> System.InvalidOperationException: ValueFactory attempted to access the Value property of this instance.
at System.Lazy`1[T].CreateValue () [0x0003a] in <373b6e083d6e45e498c9082a8eebd27f>:0
--- End of inner exception stack trace ---
at SparkleShare.SparkleShare.OnUnhandledException (System.Object sender, System.UnhandledExceptionEventArgs exception_args) [0x0000c] in <330346ba25694cbab2d9ef3ec7a020a1>:0
- package v8-314-3.14.5.10-13.fc29.x86_64 requires libicui18n.so.62()(64bit), but none of the providers can be installed
- package v8-314-3.14.5.10-13.fc29.x86_64 requires libicuuc.so.62()(64bit), but none of the providers can be installed
- libicu-62.2-1.fc29.x86_64 does not belong to a distupgrade repository
- problem with installed package v8-314-3.14.5.10-13.fc29.x86_64
So this isn't Google V8 JavaScript? Is it some 3D library?
dnf info v8-314
Summary : JavaScript Engine
URL : https://developers.google.com/v8/
License : BSD
Description : V8 is Google's open source JavaScript engine. V8 is written in C++ and is used
: in Google Chrome, the open source browser from Google. V8 implements ECMAScript
: as specified in ECMA-262, 3rd edition. This is version 3.14, which is no longer
: maintained by Google, but was adopted by a lot of other software.
Hold on, so this is just an old version of v8? My node still works fine.
This is version 3.14, which is no longer maintained by Google, but was adopted by a lot of other software.
$ dnf info v8
Name : v8
Epoch : 1
Version : 6.7.17
Release : 7.fc29
🌚 The latest one is 6.7.
Install 183 packages
Upgrade 4689 packages
Remove 5 packages
Downgrade 49 packages
Total download size: 6.9 G
DNF will only download the packages, install the GPG keys, and check the transaction.
https://github.com/bytemaster/fc_malloc #lowlevel #backend
Only just remembered that CAS means compare-and-swap, the atomic operation used for non-blocking parallel synchronization.
The key to developing fast multi-threaded allocators is eliminating lock-contention and false sharing. Even simple atomic operations and spin-locks can destroy the performance of an allocation system. The real challenge is that the heap is a multi-producer, multi-consumer resource where all threads need to read and write the common memory pool.
With fc_malloc I borrowed design principles from the LMAX disruptor and assigned a dedicated thread for moving free blocks from all of the other threads to the shared pool. This makes all threads 'single producers' of free blocks and therefore it is possible to have a lock-free, wait-free per-thread free list. This also makes a single producer of 'free blocks' which means that blocks can be acquired with a single-producer, multiple consumer pattern.
When there is a need for more memory and existing free-lists are not sufficient, each thread maps its own range from the OS in 4 MB chunks. Allocating from this 'cache miss' is not much slower than allocating stack space and requires no contention. Requests for larger than 4MB are allocated directly from the OS via mmap.