fix some smallities.

fiatjaf 2023-05-11 06:00:04 -03:00
parent bd6d325196
commit 44ea6d8458
1 changed file with 18 additions and 22 deletions

93.md

@@ -8,9 +8,9 @@ NSON
### Preamble
Some [benchmarks](https://github.com/fiatjaf/nostr-json-benchmarks/tree/2f254fff91b3ad063ef9726bb4a3d25316cf12d8) made using all libraries available on Golang show that JSON decoding is very slow. And even when people do assembly-level optimizations things only improve up to a point (e.g. for decoding a Nostr event, the decoding time is 50% smaller).
Some [benchmarks](https://github.com/fiatjaf/nostr-json-benchmarks/tree/2f254fff91b3ad063ef9726bb4a3d25316cf12d8) made using all libraries available on Golang show that JSON decoding is very slow. And even when people do assembly-level optimizations, things only improve up to a point (e.g. for decoding a Nostr event, the "Sonic" library takes about 50% of the time of the standard library).
Meanwhile, doing a simple TLV encoding reduces the decoding time to 35% and a simpler static binary format for Nostr events reduces makes that number drop to 4%. However, it would be bad for Nostr if a binary encoding was introduced, as it would be likely to cause compatibility issues, centralize the protocol and/or increase the work for everybody, more about this at [this comment](https://github.com/nostr-protocol/nips/pull/512#issuecomment-1542368664).
Meanwhile, doing a simple TLV encoding reduces the decoding time to 35% and a simpler static binary format for Nostr events makes that number drop to 4%. However, it would be bad for Nostr if a binary encoding was introduced, as it would be likely to cause compatibility issues, centralize the protocol and/or increase the work for everybody; more about this in [this comment](https://github.com/nostr-protocol/nips/pull/512#issuecomment-1542368664).
### The actual NIP
@@ -20,31 +20,27 @@ Here's an example of a NSON-encoded Nostr event:
`{"id":"57ff66490a6a2af3992accc26ae95f3f60c6e5f84ed0ddf6f59c534d3920d3d2","pubkey":"79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798","sig":"504d142aed7fa7e0f6dab5bcd7eed63963b0277a8e11bbcb03b94531beb4b95a12f1438668b02746bd5362161bc782068e6b71494060975414e793f9e19f57ea","created_at":1683762317,"nson":"2801000b0203000100400005040001004000000014","kind":1,"content":"hello world","tags":[["e","b6de44a9dd47d1c000f795ea0453046914f44ba7d5e369608b04867a575ea83e","reply"],["p","c26f7b252cea77a5b94f42b1a4771021be07d4df766407e47738605f7e3ab774","","wss://relay.damus.io"]]}`
The idea is that `"id"` comes first, so it can be accessed by reading a slice of the string from character `7` to character `71`, `pubkey` from character `83` to `147` and so on. `"content"`, `"kind"` and `"tags"` have dynamic sizes, so these are given by the values inside the `"nson"` field (which is also dynamic, its size by its first byte).
The idea is that `"id"` comes first, so it can be accessed by reading a slice of the string from character `7` to character `71`, `pubkey` from character `83` to `147` and so on. `"content"`, `"kind"` and `"tags"` have dynamic sizes, so their sizes are given by the values inside the `"nson"` field (which is also dynamic, its size given by its first byte).
### Anatomy of the `"nson"` field
It is hex-encoded. Some fields are a single byte, others are two bytes (4 characters).
It is hex-encoded. Some fields are a single byte, others are two bytes (4 characters), big-endian.
Each explanation starts on the same line as the field it refers to.
number of tags (let's say it's two)
number of items on the first tag (let's say it's three)
number of chars on the first item
number of chars on the second item
number of chars on the third item
number of items on the second tag (let's say it's two)
number of chars on the first item
number of chars on the second item
tt: number of tags (let's say it's two)
nn: number of items on the first tag (let's say it's 3)
1111: number of chars on the first item
2222: number of chars on the second item
3333: number of chars on the third item
nn: number of items on the second tag (let's say it's 2)
1111: number of chars on the first item
2222: number of chars on the second item
"nson":"xxkkccccttnn111122223333nn11112222"
nson size
kind chars
content chars
xx: nson size
kk: kind chars
cccc: content chars
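As a worked illustration (a sketch with hypothetical names, separate from the reference implementation below), the descriptor can be walked after its leading size byte to recover the `kind`, `content` and tag-item lengths:

```go
package nson

import "strconv"

// sizes holds the lengths read from the "nson" descriptor; the type and
// function names are made up for this sketch.
type sizes struct {
	kind    int     // number of chars of the "kind" number
	content int     // number of chars of the "content" string
	tags    [][]int // number of chars of each item of each tag
}

func parseDescriptor(nson string) sizes {
	hexInt := func(s string) int {
		n, _ := strconv.ParseUint(s, 16, 32)
		return int(n)
	}

	var out sizes
	pos := 2 // skip the leading "xx" size byte

	out.kind = hexInt(nson[pos : pos+2])      // kk
	out.content = hexInt(nson[pos+2 : pos+6]) // cccc
	pos += 6

	ntags := hexInt(nson[pos : pos+2]) // tt
	pos += 2
	out.tags = make([][]int, ntags)
	for t := 0; t < ntags; t++ {
		nitems := hexInt(nson[pos : pos+2]) // nn
		pos += 2
		out.tags[t] = make([]int, nitems)
		for i := 0; i < nitems; i++ {
			out.tags[t][i] = hexInt(nson[pos : pos+4]) // 1111, 2222, ...
			pos += 4
		}
	}
	return out
}
```

Applied to the example event's descriptor `2801000b0203000100400005040001004000000014`, this yields a 1-char kind, an 11-char content, and tag item lengths `[1, 64, 5]` and `[1, 64, 0, 20]`, matching the event shown above.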
### Reference implementation
Beware, all Rust maniacs, the following reference implementation is written in Go:
```go
func decodeNson(data string) *Event {
evt := &Event{}
@@ -167,13 +163,13 @@ func encodeNson(evt *Event) string {
Besides the field ordering and the presence of the `"nson"` field, other restrictions must be applied:
- the `"created_at"` field must have 10, characters, which gives us a range of dates from about 20 years ago up to 250 years in the future.
- to simplify decoding of `"content"` and `"tags"` strings, escape codes like `\uXXXX` are forbidden in NSON, UTF-8 must be used instead. Only `\n`, `\\` and `\"` are the only valid escaped sequences.
- to simplify decoding of `"content"` and `"tags"` strings, escape codes like `\uXXXX` are forbidden in NSON, UTF-8 must be used instead. `\n`, `\\` and `\"` are the only valid escaped sequences.
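A minimal sketch of that escaping rule (the helper name is hypothetical): only `\n`, `\\` and `\"` are ever produced, and everything else is left as raw UTF-8:

```go
package nson

import "strings"

// escapeString applies the restriction above: backslash, double quote and
// newline become `\\`, `\"` and `\n`; everything else (including non-ASCII)
// stays as raw UTF-8, and `\uXXXX` is never used.
func escapeString(s string) string {
	return strings.NewReplacer(`\`, `\\`, `"`, `\"`, "\n", `\n`).Replace(s)
}
```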
### Backwards-compatibility
Any reader who is not aware of the NSON-encoding can receive these events and decode them using whatever other JSON decoder they happen to have in hand. The `"nson"` field will just be ignored and life will continue as normal.
Any reader who is not aware of the NSON-encoding can receive these events and decode them using whatever means they want. The `"nson"` field will just be ignored and life will continue as normal.
Also, other event fields that may be present (for example, the NIP-03 `"ots"` field) can be added at the end, after `"tags"`, with no loss.
Also, other event fields that may be present (for example, the NIP-03 `"ots"` field) can be added at the end, after `"tags"`, with no loss to anyone.
### Other points worth mentioning