
91 lines, 2.7 KiB

new events package
query parser
use parser compiler to generate query parser

I used https://github.com/pointlander/peg which has a nice API and seems to be the most popular Golang PEG parser generator on GitHub. More about PEG:
- https://en.wikipedia.org/wiki/Parsing_expression_grammar
- https://github.com/PhilippeSigaud/Pegged/wiki/PEG-Basics
- https://github.com/PhilippeSigaud/Pegged/wiki/Grammar-Examples

rename
implement query match function
match function
uncomment test lines
add more test cases for query#Matches
fix int case
rename events to pubsub
add comment about cache
assertReceive helper to not block on receive in tests
fix bug with multiple conditions
uncomment benchmark

first results:

```
Benchmark10Clients-2     1000    1305493 ns/op    3957519 B/op    355 allocs/op
Benchmark100Clients-2     100   12278304 ns/op   39571751 B/op   3505 allocs/op
Benchmark1000Clients-2     10  124120909 ns/op  395714004 B/op  35005 allocs/op
```

124 ms to publish a message to 1000 clients. A lot.

use AST from query.peg.go
separate pubsub and query packages by using Query interface in pubsub
wrote docs and refactored code
updates from Frey's review
refactor type assertion to use type switch
cleanup during shutdown
subscriber should create output channel, not the server
overflow strategies, server buffer capacity
context as the first argument for Publish
log error
introduce Option type
update NewServer comment
move helpers into pubsub_test
increase assertReceive timeout
add query.MustParse
add more false tests for parser
add more false tests for query.Matches
parse numbers as int64 / float64
try our best to convert from other types
add number to panic output
add more comments
save commit
introduce client argument as first argument to Subscribe

> Why do we not specify a buffer size on the output channel in Subscribe? The choice of a buffer size N here depends on knowing the number of messages the server will receive and the number of messages downstream subscribers will consume. This is fragile: if we publish an additional message, or if one of the downstream subscribers reads any fewer messages, we will again have blocked goroutines.

save commit
remove reference counting
fix test
test client resubscribe
test UnsubscribeAll
client options
[pubsub/query] fuzzy testing
do not print msg as it creates a data race!
8 years ago
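The benchmark numbers quoted in the commit message come from the package's own benchmarks, which are not shown on this page. As a rough, self-contained sketch of how a fan-out benchmark of that shape can be written in Go (the `toyServer` type and its methods are invented for illustration and are not the tmlibs pubsub API; save as e.g. `fanout_bench_test.go`):

```go
package pubsub

import "testing"

// toyServer is a deliberately simplified stand-in for a pubsub server:
// every published message is fanned out to each subscriber's channel.
type toyServer struct {
	subs []chan string
}

// subscribe registers a subscriber that simply drains its channel,
// standing in for a client reading its output channel. The channels are
// never closed; the goroutines live only as long as the benchmark process.
func (s *toyServer) subscribe() {
	ch := make(chan string)
	s.subs = append(s.subs, ch)
	go func() {
		for range ch {
			// discard; a real subscriber would process the message
		}
	}()
}

// publish fans one message out to every subscriber, blocking on each send
// until that subscriber has received it.
func (s *toyServer) publish(msg string) {
	for _, ch := range s.subs {
		ch <- msg
	}
}

func benchmarkNClients(b *testing.B, n int) {
	s := &toyServer{}
	for i := 0; i < n; i++ {
		s.subscribe()
	}
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		s.publish("NewBlock")
	}
}

func Benchmark10Clients(b *testing.B)   { benchmarkNClients(b, 10) }
func Benchmark100Clients(b *testing.B)  { benchmarkNClients(b, 100) }
func Benchmark1000Clients(b *testing.B) { benchmarkNClients(b, 1000) }
```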
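The quoted note on Subscribe's output channel argues that baking a buffer size into the API is fragile. A minimal, self-contained illustration of the failure mode it describes (the sizes and names are hypothetical, not taken from the pubsub code): once the producer sends one message more than the buffer was sized for, it blocks until someone reads.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Suppose we "know" the subscriber will consume exactly 2 messages
	// and size the output channel accordingly.
	out := make(chan string, 2)

	go func() {
		// The publisher ends up sending 3 messages: one more than planned.
		for i := 1; i <= 3; i++ {
			out <- fmt.Sprintf("msg %d", i) // 3rd send blocks: buffer full, nobody reading
		}
		fmt.Println("publisher done") // never reached in this sketch
	}()

	time.Sleep(100 * time.Millisecond)
	fmt.Println("publisher goroutine is now blocked on its 3rd send")
	// Letting the subscriber create its own output channel moves the sizing
	// decision to the side that actually knows how fast it will consume.
}
```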
package query_test

import (
	"testing"

	"github.com/stretchr/testify/assert"

	"github.com/tendermint/tmlibs/pubsub/query"
)

// TODO: fuzzy testing?
func TestParser(t *testing.T) {
	cases := []struct {
		query string
		valid bool
	}{
		{"tm.events.type='NewBlock'", true},
		{"tm.events.type = 'NewBlock'", true},
		{"tm.events.name = ''", true},
		{"tm.events.type='TIME'", true},
		{"tm.events.type='DATE'", true},
		{"tm.events.type='='", true},
		{"tm.events.type='TIME", false},
		{"tm.events.type=TIME'", false},
		{"tm.events.type==", false},
		{"tm.events.type=NewBlock", false},
		{">==", false},
		{"tm.events.type 'NewBlock' =", false},
		{"tm.events.type>'NewBlock'", false},
		{"", false},
		{"=", false},
		{"='NewBlock'", false},
		{"tm.events.type=", false},
		{"tm.events.typeNewBlock", false},
		{"tm.events.type'NewBlock'", false},
		{"'NewBlock'", false},
		{"NewBlock", false},
		{"", false},
		{"tm.events.type='NewBlock' AND abci.account.name='Igor'", true},
		{"tm.events.type='NewBlock' AND", false},
		{"tm.events.type='NewBlock' AN", false},
		{"tm.events.type='NewBlock' AN tm.events.type='NewBlockHeader'", false},
		{"AND tm.events.type='NewBlock' ", false},
		{"abci.account.name CONTAINS 'Igor'", true},
		{"tx.date > DATE 2013-05-03", true},
		{"tx.date < DATE 2013-05-03", true},
		{"tx.date <= DATE 2013-05-03", true},
		{"tx.date >= DATE 2013-05-03", true},
		{"tx.date >= DAT 2013-05-03", false},
		{"tx.date <= DATE2013-05-03", false},
		{"tx.date <= DATE -05-03", false},
		{"tx.date >= DATE 20130503", false},
		{"tx.date >= DATE 2013+01-03", false},
		// incorrect year, month, day
		{"tx.date >= DATE 0013-01-03", false},
		{"tx.date >= DATE 2013-31-03", false},
		{"tx.date >= DATE 2013-01-83", false},
		{"tx.date > TIME 2013-05-03T14:45:00+07:00", true},
		{"tx.date < TIME 2013-05-03T14:45:00-02:00", true},
		{"tx.date <= TIME 2013-05-03T14:45:00Z", true},
		{"tx.date >= TIME 2013-05-03T14:45:00Z", true},
		{"tx.date >= TIME2013-05-03T14:45:00Z", false},
		{"tx.date = IME 2013-05-03T14:45:00Z", false},
		{"tx.date = TIME 2013-05-:45:00Z", false},
		{"tx.date >= TIME 2013-05-03T14:45:00", false},
		{"tx.date >= TIME 0013-00-00T14:45:00Z", false},
		{"tx.date >= TIME 2013+05=03T14:45:00Z", false},
		{"account.balance=100", true},
		{"account.balance >= 200", true},
		{"account.balance >= -300", false},
		{"account.balance >>= 400", false},
		{"account.balance=33.22.1", false},
		{"hash='136E18F7E4C348B780CF873A0BF43922E5BAFA63'", true},
		{"hash=136E18F7E4C348B780CF873A0BF43922E5BAFA63", false},
	}

	for _, c := range cases {
		_, err := query.New(c.query)
		if c.valid {
			assert.NoError(t, err, "Query was '%s'", c.query)
		} else {
			assert.Error(t, err, "Query was '%s'", c.query)
		}
	}
}
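For context, here is a short sketch of how the parser exercised above might be used from client code. query.New appears in the test and query.MustParse is mentioned in the commit message; the Matches call and its tag-map signature are an assumption for illustration and may not match the package's actual API.

```go
package main

import (
	"fmt"

	"github.com/tendermint/tmlibs/pubsub/query"
)

func main() {
	// New returns an error for malformed queries, which is exactly what the
	// parser test above asserts on.
	q, err := query.New("tm.events.type='NewBlock' AND abci.account.name='Igor'")
	if err != nil {
		panic(err)
	}

	// MustParse (mentioned in the commit message) panics on a bad query,
	// which is convenient for queries that are effectively constants.
	_ = query.MustParse("abci.account.name CONTAINS 'Igor'")

	// Assumed usage: Matches reports whether a set of event tags satisfies
	// the query. The real method name and argument type may differ.
	tags := map[string]interface{}{
		"tm.events.type":    "NewBlock",
		"abci.account.name": "Igor",
	}
	fmt.Println(q.Matches(tags))
}
```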