> Lisp hackers have been effortlessly reshaping the language for decades using the powerful macro system and extending and bending the language to their will.

I've written a bit of Racket code (https://github.com/evdubs?tab=repositories&q=&type=&language...) and I still haven't written a macro. In only one case did I even think a macro would be useful: merging class member definitions to include both the type and the default value on the same line. It's sort of a shame that Racket, a Scheme with a much larger standard library and many great user-contributed libraries, has to deal with the Scheme/Lisp marketing of "you can build low-level tools with macros", when in practice Racket developers rarely need to write macros because those tools are already written and part of the standard library.

> But the success of Parsec has filled Hackage with hundreds of bespoke DSLs for everything. One for parsing, one for XML, one for generating PDFs. Each is completely different, and each demands its own learning curve. Consider parsing XML, mutating it based on some JSON from a web API, and writing it to a PDF.

What a missed opportunity to preach another gospel of Lisp: s-expressions. XML and JSON are forms of data that are likely not native to the programming language you're using (the exception being JSON in JavaScript). What is better than XML or JSON? S-expressions. How do Lisp developers deal with XML and JSON? Convert it to s-expressions. What about defining data? Since you have s-expressions, you aren't limited to XML and JSON: you can use sorted maps or proper dates for your data, instead of fitting everything into the array, hash, string, and float buckets as you would with JSON.
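To make that concrete, here is a rough Python sketch that renders parsed JSON as an s-expression string. The `(@ ...)` / `(: k v)` / `(& ...)` spelling for objects, pairs, and arrays is just one hypothetical convention, not a standard:

```python
import json

def to_sexpr(value):
    """Render parsed JSON as an s-expression string (hypothetical notation:
    objects -> (@ ...), key/value pairs -> (: k v), arrays -> (& ...))."""
    if isinstance(value, bool):          # check bool before numbers: True is an int in Python
        return "#t" if value else "#f"
    if value is None:
        return "()"
    if isinstance(value, dict):
        pairs = " ".join(f"(: {k} {to_sexpr(v)})" for k, v in value.items())
        return f"(@ {pairs})"
    if isinstance(value, list):
        return "(& " + " ".join(to_sexpr(v) for v in value) + ")"
    return json.dumps(value)             # strings and numbers

print(to_sexpr(json.loads('{"name": "John", "tags": ["a", "b"], "alive": true}')))
# (@ (: name "John") (: tags (& "a" "b")) (: alive #t))
```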

If you've been hearing about Lisp and are turned off by all of this "you can build a DSL and use better macros" marketing, Racket is a much more comfortable environment for a developer used to languages with large standard libraries like Java and C#.

> How do Lisp developers deal with XML and JSON? Convert it to s-expressions.

As a Common Lisp developer, that is only very vaguely true for me.

The mapping I prefer for JSON <-> Lisp is:

  true:  t
  false: nil
  null:  :null
  []:    #()
  {}:    (make-hash-table :test #'equal)

This falls out of my desire for the mapping to be bijective:

- The only built-in type that is unambiguously a mapping type is hash-table.

- nil is the only value that is falsy in CL

- () is the same as nil, so we can't use it as an empty list; vectors are the obvious alternative

- Not really any obvious values left to use for "null" so punt to a keyword.
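Those same design constraints can be sketched outside CL. Here is a rough Python analogue (names like `NULL` and `decode` are mine, for illustration), where a sentinel keeps null distinct from both false and the empty sequence:

```python
import json

class _Null:
    """Sentinel for JSON null, playing the role of the :null keyword above:
    it keeps null distinct from false and from the empty sequence, so the
    mapping stays bijective."""
    def __repr__(self):
        return ":null"

NULL = _Null()

def decode(value):
    # Arrays become tuples (the "vector" role), objects stay dicts,
    # and only null is replaced by the sentinel.
    if value is None:
        return NULL
    if isinstance(value, list):
        return tuple(decode(v) for v in value)
    if isinstance(value, dict):
        return {k: decode(v) for k, v in value.items()}
    return value

doc = decode(json.loads('{"a": null, "b": false, "c": []}'))
print(doc["a"], doc["b"], doc["c"])   # :null False ()
```

Because each JSON shape maps to exactly one Python shape, the decoding can be inverted without guessing.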

In Kernel I would use something like this:

    true        #t
    false       #f
    null        ()
    [...]       (& ...)
    "k" : v     (: k v)
    {...}       (@ ...)  

Where &, :, @ are defined as:

    ($define! &
        ($lambda args (cons list args)))

    ($define! : 
        ($vau (key value) env
            (list key (eval value env))))
            
    ($define! @ 
        (wrap 
            ($vau kvpairs env 
                (eval (list* $bindings->environment kvpairs) env))))

Using the "person" example from the JSON/syntax section on Wikipedia:

    ($define! person
        (@
            (: first_name "John")
            (: last_name "Smith")
            (: is_alive #t)
            (: age 27)
            (: address 
                (@
                    (: street_address "21 2nd Street")
                    (: city "New York")
                    (: state "NY")
                    (: postal_code "10021-3100")))
            (: phone_numbers
                (& (@ (: type "home") (: number "212 555-1234"))
                   (@ (: type "office") (: number "646 555-4567"))))
            (: children
                (& "Catherine" "Thomas" "Trevor"))
            (: spouse ())))

I would then define `?`:

    ($define! ? $remote-eval)

Now we can query the object:

    > (? age person)
    27

    > (? postal_code (? address person))
    "10021-3100"

    > (car (? children person))
    "Catherine"

    > (cdr (? children person))
    ("Thomas" "Trevor")

    > (? type (cadr (? phone_numbers person)))
    "office"

    > (? number (car (? phone_numbers person)))
    "212 555-1234"

    > ($define! full_name ($lambda (p) (string-append (? first_name p) " " (? last_name p))))
    > (full_name person)
    "John Smith"

In Clojure:

    true: true
    false: false
    null: nil
    []: []
    {}: {}

For what it's worth, anytime I have written a macro it's usually not because it's needed, but just because I think it'll be fun :)

When I learned Scheme, I liked the language but strongly disliked macros and quotation. I'd only been using it a short while, and when I searched for solutions to a few problems these "fexpr" things kept appearing, which I didn't understand, along with this "Kernel" language. I decided to learn it, since fexprs were apparently the solution to several of my problems. It wasn't easy at first - I had to read the Kernel Report several times - but I ended up finding it far more intuitive than macros and quotes.

I've not written a Scheme macro since. I've written hundreds of Kernel operatives though.

I was also a typoholic previously, but am in remission now thanks to Kernel.

https://web.cs.wpi.edu/~jshutt/kernel.html

Think of macros as what you want when you want to perform computation at compile time rather than run time.

An example: building the equivalent of a switch statement, but one that compares (via string equality) against a set of strings. The macro would translate this into code that does something like a decision tree on string length or on particular characters at particular positions.
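A rough Python analogue of that idea, with the caveat that the "compilation" here happens once at closure-build time rather than at macro-expansion time (the helper names are mine):

```python
def compile_string_switch(cases):
    """'Compile' a string switch: group the case strings by length so that
    dispatch first checks len(s), then compares only within that bucket.
    A Lisp macro would do this grouping at compile time; here we do it
    once, up front, and return a closure."""
    by_len = {}
    for key, value in cases.items():
        by_len.setdefault(len(key), {})[key] = value

    def switch(s, default=None):
        bucket = by_len.get(len(s))       # decision on length first
        if bucket is None:
            return default
        return bucket.get(s, default)     # then string equality within the bucket

    return switch

# usage
dispatch = compile_string_switch({"GET": 1, "PUT": 2, "POST": 3, "DELETE": 4})
print(dispatch("POST"))    # 3
print(dispatch("PATCH"))   # None
```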

Basically anything that's done with a preprocessor in another language can be done with macros in Lisp family languages.

The other motivation for me is to drastically reduce boilerplate code. I can’t believe people here are saying they never use macros; they are so good for this that avoiding them sounds to me like a skill issue! Overuse can damage readability, sure, but so can pretending macros are not an option.

Operatives do that for me, better than macros. Parent is correct that macros are compile time, which gives them a performance advantage over operatives - but IMO, they're not better ergonomically. I find operatives simpler, cleaner and more powerful.

I understand the use case, but Scheme macros never felt intuitive to me. I think it may be the quotation more than anything that I dislike - though I also dislike that they're second class (which was the key thing that led me to Kernel).

I use C preprocessor macros extensively and don't have the typical dislike for them that many people have - though I clearly understand their limitations and the advantage Scheme macros have over them.

Since learning Kernel, the boundary of "compile time" and "runtime" is more blurry - I can write operatives which behave somewhat like a macro, and I do more "multi-stage" programming, where one operative optimizes its argument to produce something more efficient which is later evaluated - though there are still limitations due to the inability to fully compile Kernel.

As one example, I've used a kind of operative I call a "template", which evaluates its free symbols ahead of time but doesn't actually evaluate the body. When we later apply it to some operands, it replaces the bound symbols with the operands, looking up any symbols in those operands to produce an expression in which all symbols are fully resolved - and which we don't need to immediately evaluate either. This is somewhere between a macro and a regular operative.

Consider:

    ($define! z 10)

    ($define! @add-z
        ($template (x y)
            (+ x y z)))

In this template `x` and `y` are bound variables and `+` and `z` are free. The template resolves the free symbols and returns an operative expecting 2 operands, effectively providing an operative with the body:

    ([#applicative: +] x y 10)

When we call the template with two operands, it resolves any symbols in the arguments and returns the full expression with no symbols present, but it doesn't evaluate the expression yet.

    > ($let ((x 9)
             (y 7))
          (@add-z (* x 3) (- y 13)))
    ([#applicative: +] ([#applicative: *] 9 3) ([#applicative: -] 7 13) 10)

When we decide to evaluate the expression, no symbol lookup is necessary - the operation can be performed quite quickly, despite the slow interpretation.
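For readers without a Kernel interpreter handy, the staging can be sketched in Python, assuming a tiny expression representation where tuples are calls, strings are symbols, and everything else is a literal (this is an analogy, not Kernel semantics):

```python
def template(params, body, env):
    """Stage 1: resolve the free symbols of `body` in `env` right away,
    leaving the bound symbols (those in `params`) in place."""
    def resolve(expr, lookup):
        if isinstance(expr, tuple):
            return tuple(resolve(e, lookup) for e in expr)
        if isinstance(expr, str):
            return lookup(expr)
        return expr

    staged = resolve(body, lambda s: s if s in params else env[s])

    def apply_template(*args):
        """Stage 2: substitute operands for the bound symbols. The result
        is a fully resolved expression - not yet evaluated."""
        binding = dict(zip(params, args))
        return resolve(staged, lambda s: binding[s])

    return apply_template

def evaluate(expr):
    """Stage 3: evaluate a fully resolved expression; no symbol lookup
    is needed at this point."""
    if isinstance(expr, tuple):
        f, *args = (evaluate(e) for e in expr)
        return f(*args)
    return expr

env = {"+": lambda *a: sum(a), "z": 10}
add_z = template(("x", "y"), ("+", "x", "y", "z"), env)
expr = add_z(27, -6)     # a resolved expression, still unevaluated
print(evaluate(expr))    # 31
```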

---

The $template form above isn't too difficult to implement. I've iterated through several versions of this - some of which only partially resolved the bound symbols - but lost them in a RAID failure. An earlier version, which still has some issues, survives because I put it online:

    ($provide! ($template)
        ($define! $resolve-free-symbols
            ($vau (params expr) env
                ($cond
                    ((null? expr) ())
                    ((pair? expr)
                        (cons (apply (wrap $resolve-free-symbols) 
                                     (list params (car expr)) 
                                     env)
                              (apply (wrap $resolve-free-symbols) 
                                     (list params (cdr expr)) 
                                     env)))
                    ((symbol? expr)
                        ($if (member? expr params)
                             expr
                             (eval expr env)))
                    (#t expr))))

        ($define! $resolve-bound-symbols
            ($vau (params expr) env
                ($cond
                    ((null? expr) ())
                    ((pair? expr)
                        (cons (apply (wrap $resolve-bound-symbols)
                                     (list params (car expr))
                                     env)
                              (apply (wrap $resolve-bound-symbols)
                                     (list params (cdr expr))
                                     env)))
                    ((symbol? expr)
                        ($if (member? expr params)
                             (eval expr env)
                             expr))
                    (#t expr))))

        ($define! zip
            ($lambda (fst snd)
                ($cond
                    (($and? (null? fst) (null? snd)) ())
                    (($and? (pair? fst) (pair? snd))
                        (cons (list (car fst)
                                    (list* (($vau #ignore #ignore list)) (car snd)))
                              (zip (cdr fst) (cdr snd)))))))

        ($define! $template
            ($vau (params body) senv
                ($let ((newbody 
                        (eval (list $resolve-free-symbols params body) senv)))
                    ($vau args denv
                        (eval (list $resolve-bound-symbols params newbody)
                              (eval (list* $bindings->environment
                                           (zip params args))
                                    denv))))))) 

---

At present the best interpreter is klisp, and the fastest is bronze-age-lisp, which builds on klisp, with parts hand-written in 32-bit x86 assembly.

I've been working on a faster interpreter for a number of years as a side project, optimized for x86_64, with some parts in C and some in assembly. It has diverged from the Kernel Report in places, but still retains what I see as the key ingredients.

My modified Kernel has optional types, and we have operatives to `$typecheck` complex expressions ahead of evaluating them. I intend to go all in on the "multi-stage" aspect and have operatives to JIT-compile expressions in a manner similar to the above template.

Which implementation do you use?

I use klisp[1] and bronze-age-lisp[2] mostly for testing, as they're the closest to a feature complete implementation of the Kernel Report.

I've written a number of less complete interpreters over the years. I currently have a long-running side-project to provide a more complete, highly optimized implementation for x86_64.

[1]: https://github.com/dbohdan/klisp

[2]: https://github.com/ghosthamlet/bronze-age-lisp

Sometime back 15 years ago [0], I hit a bit of an existential crisis regarding my career and the kind of work I was doing.

I thought the particular technology I was working in was "part of the problem", as I felt pigeon-holed by .NET and C# into always being a corporate-monkey CRUD consultant. So I went out in search of something better. Different programming languages. Different environments. Just something that didn't mean working for asshole clients who thought it was okay to yell at people, in a hotel on the complete opposite side of the country, about an outage that was due more to local radio interference than to anything I had done in the database code that configured things. Long story involving missing a holiday with my family over something completely outside of my control and still getting blamed for it. The problem wasn't the technology, it was the company I was working for, but at that time in my life I didn't understand the difference.

Racket was a life preserver at that time.

It's really hard to explain, because I never actually ended up working in Racket full-time and I haven't even touched it in probably 10 years. But it still has this impact on my identity as a software developer. I learned Racket. I forced myself out of being a Blub programmer and into someone who saw the strings that underwrote The Universe. The beauty of S-Expressions and syntactic forms and code-is-data and all that. It had a permanent impact on my view of what this job could be.

I still work primarily in .NET. Most of the things that were technological issues with .NET Framework got resolved by what was first .NET Core and is now simply .NET. So, I no longer feel like my tools are holding me back. And I'll forever be thankful to Racket (and the community! The Racket listserv was amazing back then. Probably still is, I just don't interact with it anymore) for being there for me.

Edit: Haskell was in fact another language I explored at that time, in addition to OCaml and Ruby and Python (ugh! Don't get me started on Python!) and many other things. They were all "cool" in their own way, but nothing felt like Racket. They all had their own weird rules that felt like being bossed around again. Racket felt like art. Racket felt like it was there for me, not the other way around.

[0] I still think of this time as the "mid-point" in my career, but it's now been long enough ago that I've been more past the crisis than I was ever in it. Strange feelings.

> [...] who thought it was okay to yell at people about [...]

That society as a whole accepts this kind of abuse, no matter the industry or circumstances, is beyond me. It's an abuse of power. If anybody did this to anyone, the only appropriate response should be to walk away and never come back. Nobody would want to accept this kind of crap from family and friends, so why is it ok in a professional setting? Because of the money/power dynamics at play? We need a consensus in society to walk away; that would end it in no time.

> Nobody would want to accept this kind of crap from family and friends

Hm… I think I have bad news for you.

Yes, often you're the pressure-release valve for urges that friends and family otherwise suppress. Especially family.