• 260 Posts
  • 50 Comments
Joined 2 years ago
cake
Cake day: June 14th, 2023

  • I could understand “method” = associated function whose first parameter is named self, so it can be called like self.foo(…). This would mean functions like Vec::new aren’t methods. But the author’s requirement also excludes functions that take generic arguments, like Extend::extend.

    However, even the above definition gives old terminology new meaning. In traditional OOP languages, all functions in a class are considered methods: those only callable on an instance are “instance methods”, while the others are “static methods”. So translating OOP terminology into Rust, all associated functions are still methods, and those with/without method call syntax are instance/static methods.

    Unfortunately, I think some people misuse “method” to refer only to “instance method”, even in OOP languages, so to be 100% unambiguous the terms have to be (a quick Rust sketch follows the list):

    • Associated function: function in an impl block.
    • Static method: associated function whose first argument isn’t self (even if it takes Self under a different name, like Box::leak).
    • Instance method: associated function whose first argument is self, so it can be called like self.foo(…).
    • Object-safe method: a method callable from a trait object.
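
    To make the four terms concrete, here’s a minimal Rust sketch (the Counter type, its functions, and the Describe trait are invented for this example):

    struct Counter {
        count: u32,
    }

    impl Counter {
        // Associated function and "static method": no self parameter.
        // Callable only as Counter::new(), never with method syntax.
        fn new() -> Self {
            Counter { count: 0 }
        }

        // Still a "static method": it takes Self, but under the name c
        // rather than self (compare Box::leak).
        fn into_count(c: Self) -> u32 {
            c.count
        }

        // Instance method: first parameter is self, so method call
        // syntax works.
        fn increment(&mut self) {
            self.count += 1;
        }
    }

    trait Describe {
        // Object-safe method: no generics and a self receiver, so it
        // can be called on a trait object like &dyn Describe.
        fn describe(&self) -> String;
    }

    impl Describe for Counter {
        fn describe(&self) -> String {
            format!("count = {}", self.count)
        }
    }

    fn main() {
        let mut counter = Counter::new();
        counter.increment();              // method call syntax
        Counter::increment(&mut counter); // the same call, fully qualified
        let obj: &dyn Describe = &counter;
        println!("{}", obj.describe());
        println!("{}", Counter::into_count(counter));
    }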


  • I find writing the parser by hand (recursive descent) to be easiest. Sometimes I use a lexer generator, or if there isn’t a good one (idk for Scheme), write the lexer by hand as well. Define a few helper functions and macros to remove most of the boilerplate (you really benefit from Scheme here), and you almost end up writing the rules out directly.

    Yes, you need to implement choice manually and figure out what and when to look ahead. Yes, the final parser won’t be as readable as a BNF specification. But I find that hand-rolling a parser for most grammars, even with the manual choice and lookahead, is surprisingly easy and fast.

    The problem with parser generators is that when they work, they work well, but when they don’t (they fail to generate, the generated parser parses the wrong node, or it’s very inefficient), it can be really hard to figure out why. A hand-rolled parser is much easier to debug, so when your grammar inevitably has problems, going from spec to a working hand-rolled parser ends up taking less total time than going from spec to a working generated one.

    The hand-rolled rules may look something like this (with user-defined macros and functions define-parse, parse, peek, peek2, next, and some simple rules like con-id and name-id as individual tokens):

    ; pattern ::= [ con-id ] factor "begin" expr-list "end"
    (define-parse pattern
      (mk-pattern
        (if (con-id? (peek)) (next))  ; optional leading con-id
        (parse factor)
        (begin (parse "begin") (parse expr-list) (parse "end"))))

    ; factor ::= name-id
    ;          | symbol-literal
    ;          | long-name-id
    ;          | numeric-literal
    ;          | text-literal
    ;          | list-literal
    ;          | function-lambda
    ;          | tacit-arg
    ;          | '(' expr ')'
    (define-parse factor
      (cond
        ; one token of lookahead distinguishes a plain name
        ; from a dotted long name
        [(name-id? (peek)) (if (equal? "." (peek2)) (mk-long-name-id …) (next))]
        [(literal? (peek)) (next)]
        …))
    

    Since you’re using Scheme, you can almost certainly macro away even more of the boilerplate above.

    Regarding LLMs: if you write the parser with EBNF comments above each rule, as above, you can paste the EBNF in and Copilot will generate the rules for you. Alternatively, you can feed a couple of EBNF/code examples to ChatGPT and ask it to generate more.

    In both cases the AI will probably make mistakes on tricky cases, but that’s practically inevitable. An LLM wrapped in an error-proof code-synthesis engine would be a breakthrough; and while there are techniques like fine-tuning, I suspect they wouldn’t improve the accuracy much and would certainly amount to more effort in total (in fact, most LLM “applications” are just a custom prompt on top of plain ChatGPT or another baseline model).



  • My general take:

    A codebase with scalable architecture is one that stays malleable even as it grows large and the people working on it change, at least relative to a codebase without one, which devolves into “spaghetti code”: nobody knows what the code does or where behaviors are implemented, and small changes break seemingly unrelated things.

    The programming language isn’t the sole determinant of a codebase’s scalability, especially if the language has all the general-purpose features one would expect nowadays (e.g. Java, C++, Haskell, Rust, TypeScript). But it’s a major contributor. A “non-scalable” language makes spaghetti design decisions too easy and scalable design decisions convoluted and verbose. A scalable language does the opposite: it nudges developers towards scalable designs almost automatically, at least relative to a “non-scalable” language, provided the software already has a scalable foundation.




  • I believe the answer is yes, except that we’re talking about languages with currying, and those can’t represent a zero-argument function without the “computation” kind (remember: all functions are Arg -> Ret, and a multi-argument function is just Arg1 -> (Arg2 -> Ret)). In the linked article, all functions are in fact “computations” (the two variants of CompType are Thunk ValType and Fun ValType CompType). The author also describes computations as “a way to add side-effects to values”, and the imperative-language equivalent of “a value which produces side-effects when read” is either a zero-argument function (getXYZ()) or a “getter”, which is just syntactic sugar for a zero-argument function.

    The other reason may be that it’s easier in an IR to represent computations as intrinsic types than as zero-argument closures. Except that if all functions are computations, then your “computation” type is already your closure type. So the difference again only matters if you’re writing an IR for a language with currying: without CBPV you could just represent closures as things that take exactly one argument, but CBPV also permits zero-argument closures.
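
    For concreteness, here’s a rough Rust transliteration of the article’s two types (Thunk and Fun are the article’s variant names; ValType’s contents and the rest of the framing are my own guesses):

    // Value types: things that just "are".
    enum ValType {
        Int, // other base types elided
    }

    // Computation types: things that "do". Note there is no
    // zero-argument Fun variant: every Fun takes exactly one argument,
    // and multi-argument functions are curried as Fun(a, Fun(b, ret)).
    enum CompType {
        // A computation that just produces a value: the analogue of a
        // zero-argument function or an imperative getter.
        Thunk(ValType),
        // A one-argument function from a value to another computation.
        Fun(ValType, Box<CompType>),
    }

    fn main() {
        use CompType::*;
        use ValType::*;

        // Arg1 -> (Arg2 -> Ret): a two-argument function via currying.
        let _two_arg = Fun(Int, Box::new(Fun(Int, Box::new(Thunk(Int)))));

        // A "zero-argument function" has no Fun layer at all: it is
        // just a computation that yields a value when run, like getXYZ().
        let _zero_arg = Thunk(Int);
    }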