Documentation#

This documentation starts with a quickstart section containing code that shows the main features of the library. The code is then explained in the 4 following sections, all tied back to the quickstart and showing alternative methods, to form an overall understanding of how this tool works:

  1. Setting up the mod.

  2. Searching and comparing things.

  3. Editing things.

  4. Creating/deleting things.

If you struggle to understand how ndf entities are represented in the model, then read this primer on the model relations.

Quick Start#

Before We Begin#

The first thing we need to do is set up a pristine copy of the mod sources. This module works in the following way:

  1. Get a file from an unchanged mod (we’ll refer to it as a source mod).

  2. Parse it and convert it to the model representation.

  3. Apply our edits.

  4. Format back to ndf and write it to a directory with our final mod (we’ll refer to it as a destination mod).

Generate a new source mod using CreateNewMod.bat provided by Eugen Systems and place it in path/to/src/mod.

Make The Script#

We are going to make a mod that adds a 'HE' trait to all autocannons, then we are going to add 2 new weapon types. Not the most useful mod but it is enough to demonstrate the basic workflow.

Create a new Python file, say, my_mod.py, with the following code:

quick_start.py#
 1import ndf_parse as ndf
 2
 3# change PATH_TO_SRC_MOD and PATH_TO_DEST_MOD to actual paths, like this:
 4# ndf.Mod(r"C:\game\mods\src_mod", r"C:\game\mods\dest_mod")
 5# `src_mod` must be a root folder of the source mod, i.e. the one where
 6# folders CommonData and GameData reside.
 7mod = ndf.Mod(PATH_TO_SRC_MOD, PATH_TO_DEST_MOD)
 8mod.check_if_src_is_newer()
 9
10with mod.edit(r"GameData\Generated\Gameplay\Gfx\Ammunition.ndf") as source:
11    # let's find all automatic cannons and edit them a bit
12    # quotes in pattern must match source!     v          v
13    pattern = "TAmmunitionDescriptor(Caliber = 'DYDXERZARY')"
14    for obj_row in source.match_pattern(pattern):
15        # each time we get here it means that we've got ammunition of matching caliber
16        print(f"Processing {obj_row.namespace}... ", end='')
17        traits_row = obj_row.v.by_member("TraitsToken")
18        if any(item.v == "'HE'" for item in traits_row.v):
19            # skip this ammo if it already has a given trait
20            print("    already has 'HE' trait, skipping.")
21            continue  # note: this means "skip code below and CONTINUE to the next loop
22                      # iteration", NOT "continue execution below"
23        # 30mm that has no HE, let's fix that
24        print("    adding 'HE' trait.")
25        traits_row.v.add(value="'HE'")  # this will get converted under the hood
26                                        # into a row with value 'HE'
27
28    # now let's add 2 new types of ammo
29    EDITS = [
30        {
31            "donor": "Ammo_AutoCanon_AP_30mm_24A2",
32            "new_name": "Ammo_BFG_30mm",
33            "guid": "GUID:{6b41aa60-9fd7-4c47-8614-c7b6e8009ef3}",
34            "dispers_min": "0",
35            "dispers_max": "Metre",
36            "damage": "10.0",
37        },
38        {
39            "donor": "Ammo_GatlingAir_ADEN_Mk4_30mm_x2",
40            "new_name": "Ammo_ImDrunk_30mm",
41            "guid": "GUID:{1e285336-37e9-41c1-9b67-2bab21271bfc}",
42            "dispers_min": "((500) * Metre)",
43            "dispers_max": "((1000) * Metre)",
44            "damage": "2.0",
45        },
46    ]
47
48    for edit in EDITS:
49        # grab a copy of a row that matches our needs the most
50        gun_donor_row = source.by_namespace(edit["donor"]).copy()
51        # rename it
52        gun_donor_row.namespace = edit["new_name"]
53        print(f"Building {gun_donor_row.namespace}... ", end='')
54        # apply edits to the member rows of the ammo data
55        ammo = gun_donor_row.v
56        ammo.by_member("DescriptorId").v = edit["guid"]
57        ammo.by_member("DispersionAtMinRange").v = edit["dispers_min"]
58        ammo.by_member("DispersionAtMaxRange").v = edit["dispers_max"]
59        ammo.by_member("PhysicalDamages").v = edit["damage"]
60        # add new ammo descriptor to the source file
61        source.add(gun_donor_row)
62        print(f"added with an index of {gun_donor_row.index}")
63print("DONE!")

Run this script (but don’t forget to substitute your mod paths first!). You should see prints that correspond to the operations from this script. Now, if you navigate to your newly generated mod and check Ammunition.ndf, you should find that every autocannon has the 'HE' trait and that there are 2 new ammo types at the end of the file.

Quick Start Explained#

Setting Up#

Now let’s go line by line and examine what this code does.

7mod = ndf.Mod(PATH_TO_SRC_MOD, PATH_TO_DEST_MOD)
8mod.check_if_src_is_newer()

Here we initialize our Mod object. It’s nothing but a convenience wrapper that saves you from writing boilerplate code. The second line checks whether your source mod was updated. Whenever the game gets an update, you should delete your source mod and regenerate it anew. The next time you run the script, that second line detects the change (by comparing the modification dates of the source and destination folders), nukes the destination mod and makes a fresh copy of it from the source.

Warning

Never store anything important inside of source and destination folders! It will get nuked by such an update. Store it elsewhere or regenerate it with your script.
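Conceptually, the update check boils down to comparing folder modification times, something like this plain-Python sketch (not the library’s actual implementation; the function name here is made up):

```python
import os

def src_is_newer(src_dir: str, dst_dir: str) -> bool:
    """Was the source folder modified after the destination was generated?"""
    return os.path.getmtime(src_dir) > os.path.getmtime(dst_dir)

# if this returns True, the destination mod gets deleted and
# recreated as a fresh copy of the source
```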

Edits#

10with mod.edit(r"GameData\Generated\Gameplay\Gfx\Ammunition.ndf") as source:

Mod.edit() loads the file, parses it, converts it to a python representation (based on model) and stores it internally as an Edit, returning the Mod back. But since Mod is implemented as a context manager, within a with statement it yields a model.List that represents our ndf file and which we can alter. As soon as the with statement’s scope is closed (i.e. when all of the operations defined in this block are completed), it will automatically format the List back to ndf code and write the file out to the destination mod.

Note

If you are just tinkering with the code and don’t want to write the file out on each test run, you can disable it by adding the following argument:

10with mod.edit(r"GameData\Generated\Gameplay\Gfx\Ammunition.ndf", False) as source:

You can also manually manage a bunch of edits at the same time. Suppose you want to rework ammunition for specific decks only. For that you would need at least 4 files: Ammunition.ndf, WeaponDescriptor.ndf, UniteDescriptor.ndf and Decks.ndf, of which you will edit only 3 (no edits for Decks). Then you would do the following:

manual_write_control.py#
 1import ndf_parse as ndf
 2mod = ndf.Mod("path/to/src/mod", "path/to/dst/mod")
 3# We don't want to write out decks because we only use them to query specific
 4# units, so we set the `save` argument to ``False``
 5decks_src = mod.edit(r"GameData\Generated\Gameplay\Decks\Decks.ndf", False).current_tree
 6# others we will modify so we leave `save` at default (``True``)
 7units_src = mod.edit(r"GameData\Generated\Gameplay\Gfx\UniteDescriptor.ndf").current_tree
 8weap_src  = mod.edit(r"GameData\Generated\Gameplay\Gfx\WeaponDescriptor.ndf").current_tree
 9ammo_src  = mod.edit(r"GameData\Generated\Gameplay\Gfx\Ammunition.ndf").current_tree
10
11...  # here we do all the work with 4 sources
12
13for edit in mod.edits:
14    mod.write_edit(edit, False)  # ``False`` disables forced writing so it
15                                 # respects `edit.save` attribute

Search Tools#

Currently there are 4 main ways to search for items of interest:

  1. model.abc.List.match_pattern. Good for matching items with some shared “trait” in a single list-like (as in the quickstart code, lines 12-13).

  2. A recursive walk(). Good for cases when one needs to walk an entire subtree (i.e. also search inside children of a list-like) and/or match on a complex parameter. If we were to reformulate our pattern search with a walker, it would look like this:

    quick_start_v2.py#
    10def is_autocannon(row):
    11    return (isinstance(row, ndf.model.ListRow)  # ensure it's a row,
    12            # because `walk` is recursive and compares everything here,
    13            # including source itself!
    14        and isinstance(row.v, ndf.model.Object)  # ensure there is an
    15            # object in this row
    16        and row.v.type == "TAmmunitionDescriptor"  # ensure it's the
    17            # type we need. `type` attr is specific to ObjectRow and
    18            # its subclasses!
    19        and row.v.by_member("Caliber").v == "'DYDXERZARY'"  # ensure it's a
    20            # 30mm autocannon. Note embedded ^  single  ^ quotes!
    21        and not any(item.v == "'HE'" for item in row.v.by_member("TraitsToken").v)
    22        )
    23
    24with mod.edit(r"GameData\Generated\Gameplay\Gfx\Ammunition.ndf") as source:
    25    for row in ndf.walk(source, is_autocannon):
    26        # any row here is a 30mm autocannon with no 'HE', just add one
    27        row.v.by_member("TraitsToken").v.add(value="'HE'")
    

    Cons: very verbose (all filtering is explicit), it processes too many extra nodes, and we call by_member a couple of times, which is a bit costly.

    Pros: allows for very complex filtering, walks entire tree which is necessary in some cases.

    To summarize: a very specific filter for very specific tasks that is good to have nonetheless.

  3. by_namespace(), by_member(), by_param(), by_key() and similar methods for finding a single unique element. These are strict by default (i.e. they will raise an error and terminate execution if an item is not found), which is good for avoiding surprising silent bugs if Eugen removes some field one was relying on.

  4. Manually compare() list-likes and compare() rows. In fact this is what is used under the hood for abc.List.match_pattern().

Get and Set Values#

Get and Set Values for Rows#

Getting and setting values in rows is pretty straightforward:

>>> import ndf_parse as ndf
>>> row = ndf.model.ListRow.from_ndf("Namespace is 12")
>>> row.namespace  # get namespace
'Namespace'
>>> row.n  # same but with an alias
'Namespace'
>>> row.value  # get value (also has an alias, `v`)
'12'
>>> # get row's visibility (has an alias `vis`, stores values like
>>> # 'unnamed', 'export' etc.)
>>> row.visibility
>>> # ^ will not print anything because it's `None`
>>> #
>>> # set a value
>>> row.v = 24
>>> row.namespace = "NewName"
>>> row.vis = 'export'
>>> ndf.printer.print(row)
export NewName is 24

All possible values and methods for rows are documented in model plus shared methods are described in model.abc.Row.

Get and Set Values for List-Likes#

List-likes can be queried for items like any other pythonic list. On top of that, they have methods for searching (and removing) rows by specific attributes (as was demonstrated in search tools). They also share a bunch of methods like add() (used in the quickstart to add a 'HE' trait), insert(), replace() and remove(). Their API is well documented in their parent class.

A couple things worth mentioning:

  1. Map differs from others in that it also accepts pairs as row representations:

    >>> import ndf_parse as ndf
    >>> mymap = ndf.model.Map()
    >>> mymap.add(("'key1'", "'value1'"), ("'key2'", "24"))
    [MapRow[0](key="'key1'", value="'value1'"),
    MapRow[1](key="'key2'", value='24')]
    >>> ndf.printer.print(mymap)
    MAP[('key1', 'value1'), ('key2', 24)]
    
  2. model.List, model.Object and model.Template all store their type inside their own type attribute instead of the row’s. This issue is covered in detail in the typing ambiguity section.

  3. model.List differs in how it parses ndf code snippets.

  4. All row types are convertible into an integer: they return their index within the parent list-like. This allows us to do the following:

    >>> row_from, row_to  # two rows from a list
    (MapRow[3](key='Key3', value='3'),
    MapRow[6](key='Key6', value='6'))
    >>> mymap  # the list
    Map[MapRow[0](key='Key0', value='0'),
    MapRow[1](key='Key1', value='1'),
    MapRow[2](key='Key2', value='2'),
    MapRow[3](key='Key3', value='3'),
    MapRow[4](key='Key4', value='4'),
    MapRow[5](key='Key5', value='5'),
    MapRow[6](key='Key6', value='6'),
    MapRow[7](key='Key7', value='7')]
    >>> del mymap[row_from : row_to]
    >>> mymap  # list has now lost 3 rows including the `row_from`
    Map[MapRow[0](key='Key0', value='0'),
    MapRow[1](key='Key1', value='1'),
    MapRow[2](key='Key2', value='2'),
    MapRow[3](key='Key6', value='6'),
    MapRow[4](key='Key7', value='7')]
    >>> row_from  # it is now dangling
    MapRow[DANGLING](key='Key3', value='3')
    >>> row_to  # this row is tracking its position accordingly
    MapRow[3](key='Key6', value='6')
    >>> mymap.remove(row_to) # remove the second too
    MapRow[DANGLING](key='Key6', value='6')
    

    Caution

    Just don’t try using a dangling pointer as an index, it will crash.

Create New Values#

Create New List-Likes#

List-likes are mostly created via a direct constructor call. The logic is “make a new list and add stuff after”. Examples:

>>> import ndf_parse as ndf
>>> md = ndf.model  # an alias for brevity
>>> ndf_print = ndf.printer.print  # also an alias
>>> # a new map
>>> md.Map()
Map[]
>>> # a new template
>>> tpl = md.Template()
>>> tpl
Template[]
>>> tpl.params  # we can also query its md.Params section if we want
Params[]
>>> md.Params()  # or make a new one
Params[]
>>> # a new list
>>> lst = md.List()
>>> ndf_print(lst)
[]
>>> lst.type = "RGB"  # make it typed
>>> ndf_print(lst)
RGB[]
>>> ndf_print(md.List(type="RGB"))  # or create it already typed
RGB[]
>>> # a new source
>>> md.List(is_root=True)
List[]
>>> # yes, source is just a List with a flag on. And you can convert
>>> # one into the other just by altering `lst.is_root` parameter.
>>> # this also affects the type of snippets it accepts, more on that
>>> # in "Source Is a List But It's Not" section.

It would be nice to have an option to initialize these directly with code snippets, like md.List("A is 12, B is 24"), but there are 2 issues: List is a special case that makes parsing snippets context-dependent, and there is a risk of type collisions between list-like args and row args. This is resolvable in the future, it just requires time. For now, if one is really determined to create lists via snippets, there is a workaround:

>>> import ndf_parse as ndf
>>> md = ndf.model
>>> ndf_print = ndf.printer.print
>>> # a new source from a snippet
>>> source = ndf.convert("A is 12\nB is 24\nC is Obj(memb = 12)")  # \n denotes a newline
>>> ndf_print(source)
A is 12
B is 24
C is Obj
(
    memb = 12
)
>>> source = ndf.convert("""
... A is 12
... B is 24
... C is Obj(memb = 12)
... """) # snippet with multiline string, a bit easier to read
>>> ndf_print(source)
A is 12
B is 24
C is Obj
(
    memb = 12
)
>>> # any Map/List/Object/Template (Param is a bit tricky, but doable)
>>> # snippet below will return a dict with row's arguments mapped as keys
>>> # (it's this way because of how the internal converter works, just accept it as is)
>>> dict_wrapped = ndf.expression("MAP[('A', 1), ('B', 2)]")
>>> dict_wrapped
{'value': Map[MapRow[0](key="'A'", value='1'), MapRow[1](key="'B'", value='2')]}
>>> dict_wrapped['value']  # fetch the value itself
Map[MapRow[0](key="'A'", value='1'), MapRow[1](key="'B'", value='2')]
>>> ndf.expression("MAP[('A', 1), ('B', 2)]")['value']  # same, just a one-liner
Map[MapRow[0](key="'A'", value='1'), MapRow[1](key="'B'", value='2')]
>>> # you can make a helper function if you use it a lot
>>> def mk_listlike(snippet):
...     return ndf.expression(snippet)['value']
...
>>> mk_listlike("[1, 2, 3]")
List[ListRow[0](value='1', visibility=None, namespace=None),
ListRow[1](value='2', visibility=None, namespace=None),
ListRow[2](value='3', visibility=None, namespace=None)]
>>> mk_listlike("MAP[('A', 1), ('B', 2)]")
Map[MapRow[0](key="'A'", value='1'), MapRow[1](key="'B'", value='2')]
>>> mk_listlike("Obj(memb1 = 12\nmemb2 = 24)")
Object[MemberRow[0](value='12', member='memb1', type=None,
visibility=None, namespace=None),
MemberRow[1](value='24', member='memb2', type=None,
visibility=None, namespace=None)]
>>> mk_listlike("template Templ[parm1, parm2 = 12] is Obj(memb1 = <parm1>\nmemb2 = <parm2>)")
Template[MemberRow[0](value='<parm1>', member='memb1', type=None, visibility=None, namespace=None),
MemberRow[1](value='<parm2>', member='memb2', type=None, visibility=None, namespace=None)]
>>> # or grab just params of a template
>>> mk_listlike("template T[parm1, parm2 = 12] is Obj()").params
Params[ParamRow[0](param='parm1', type=None, value=None),
ParamRow[1](param='parm2', type=None, value='12')]
>>> # and here is a demonstration of how `expression` works since we're at it
>>> ndf.expression("template T[parm1, parm2 = 12] is Obj()")
{'value': Template[], 'namespace': 'T'}
>>> ndf.expression("export Name is Obj()")
{'value': Object[], 'namespace': 'Name', 'visibility': 'export'}

Create New Rows#

New rows can be created in a multitude of ways:

  1. Create rows directly inside of a list:

    >>> import ndf_parse as ndf
    >>> lst = ndf.model.List()
    >>> lst.add("A is 12")  # create via snippet
    ListRow[0](value='12', visibility=None, namespace='A')
    >>> lst.add({'value': "24", 'namespace': "B"})  # create as a dict
    ListRow[1](value='24', visibility=None, namespace='B')
    >>> lst.add(value="42", namespace="C")  # create directly via args
    ListRow[2](value='42', visibility=None, namespace='C')
    

    Note

    List-likes also support methods like insert(), replace() and remove(), as well as pythonic getters/setters (lst[0] = …). Please read the reference; these are extensively documented there along with some additional info (like the ability to add rows via lists and iterables).

    Note

    Map has the additional ability to accept rows as plain python tuples:

    >>> mymap = ndf.model.Map()
    >>> mymap.add(("A", "1"))
    MapRow[0](key='A', value='1')
    >>> # pythonic way with a caveat
    >>> mymap[0] = ("B", "2"),  # >>> NOTE THE COMMA AT THE END <<<
    >>> mymap
    Map[MapRow[0](key='B', value='2')]
    

    You can find out here why there is an extra comma in a pythonic setter.

  2. Create dangling rows with snippets:

    >>> import ndf_parse as ndf
    >>> md = ndf.model
    >>> md.ListRow.from_ndf("private Namespace is Value")
    ListRow[DANGLING](value='Value', visibility='private', namespace='Namespace')
    >>> md.MemberRow.from_ndf("member_name : member_type = Value")
    MemberRow[DANGLING](value='Value', member='member_name',
    type='member_type', visibility=None, namespace=None)
    >>> md.ParamRow.from_ndf("param_name: param_type = Value")
    ParamRow[DANGLING](param='param_name', type='param_type', value='Value')
    >>> md.MapRow.from_ndf("('key', Value)")  # note there are parentheses!
    MapRow[DANGLING](key="'key'", value='Value')
    
  3. Create dangling rows manually:

    >>> # from dict decomposition (supports aliases)
    >>> md.ListRow(**{'value':'Value', 'vis':'private', 'n':'Namespace'})
    ListRow[DANGLING](value='Value', visibility='private', namespace='Namespace')
    >>> # from args (supports aliases)
    >>> md.MemberRow(v='Value', m='member_name', t='member_type', vis=None, n=None)
    MemberRow[DANGLING](value='Value', member='member_name',
    type='member_type', visibility=None, namespace=None)
    >>> # special case for Map - pair tuple
    >>> md.MapRow(('key', 'Value'))
    MapRow[DANGLING](key='key', value='Value')
    
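About the trailing comma in the Map setter example above: it is ordinary Python syntax, not something special to this library. A trailing comma wraps the pair in an outer one-element tuple (a plain-Python illustration):

```python
pair = ("B", "2")      # a single key/value pair
wrapped = ("B", "2"),  # trailing comma: a 1-tuple containing that pair

print(pair)            # ('B', '2')
print(wrapped)         # (('B', '2'),)
```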

Delete Items#

To delete a value from a row simply replace it with None (for optionals) or a new value (for mandatory parameters, like ListRow.value):

>>> import ndf_parse as ndf
>>> row = ndf.model.ListRow.from_ndf("A is Obj(memb = 12)")
>>> val = row.value
>>> val.parent_row  # the inner list-like has the row as its parent
ListRow[DANGLING](value=Object[MemberRow[0](value='12', member='memb',
type=None, visibility=None, namespace=None)], visibility=None, namespace='A')
>>> row.value = "12"
>>> # note that on replacing the value the row automatically unparents the
>>> # inner list-like
>>> val.parent_row is None
True

To delete a row from a list simply use:

>>> source = ndf.convert("A is 12\nB is 24\nC is 42")
>>> del source[1]
>>> source
List[ListRow[0](value='12', visibility=None, namespace='A'),
ListRow[1](value='42', visibility=None, namespace='C')]
>>> # we can also use the row itself as an index if needed
>>> row = source[0]
>>> row
ListRow[0](value='12', visibility=None, namespace='A')
>>> del source[row]
>>> # note that on deleting the row the list automatically unparents it
>>> row
ListRow[DANGLING](value='12', visibility=None, namespace='A')

Printing NDF Code Out#

If you want to print data out (for debugging purposes or whatever), you can do the following:

 1import ndf_parse as ndf
 2
 3data = """Obj1 is Type1(
 4    member1 = Obj2 is Type1(
 5        member1 = nil
 6    )
 7)"""
 8
 9source = ndf.convert(data)  # manually convert data instead of using ndf.Mod
10obj_view = source[0]
11
12print("// Complete assignment statement (printing the whole row):")
13ndf.printer.print(obj_view)
14print("// Object declaration only (row's value only):")
15ndf.printer.print(obj_view.value)

This code should print out the following:

Ndf Output#
// Complete assignment statement (printing the whole row):
Obj1 is Type1
(
    member1 = Obj2 is Type1
    (
        member1 = nil
    )
)
// Object declaration only (row's value only):
Type1
(
    member1 = Obj2 is Type1
    (
        member1 = nil
    )
)

There are 2 other functions you might find useful.

General Recommendations and Caveats#

Error Suppression#

Avoid using try clauses or any other silently failing operations. If Eugen renames or moves objects or members that you’re editing, it’s in your best interest to let the script fail instead of silently ignoring a missing member or namespace. That way you will know for sure that something has changed in the source code and needs fixing, instead of bashing your head over a compiled mod that doesn’t do what you expect from it.

For that reason some functions use a strict argument that is True by default (more on that here), which forces them to fail if anything is off. Don’t turn it off unless you really know that it won’t hurt you in the long run.
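In plain-Python terms (a hypothetical sketch, not ndf_parse API), the difference between fail-fast and silent failure looks like this:

```python
members = {"Caliber": "'DYDXERZARY'"}  # pretend this is parsed ndf data

# Fail-fast: a renamed or removed member raises at the exact problem spot.
try:
    members["Calibre"]  # hypothetical rename after a game update
except KeyError as err:
    caught = err  # in a real script you would NOT catch this; let it crash

# Silent failure: `.get()` hides the problem and poisons everything downstream.
value = members.get("Calibre")  # -> None; the script "succeeds", the mod is broken
```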

Nested Edits#

Avoid nesting with mod.edit(...) inside of another with mod.edit(...) if they both access the same source file. The first clause will build an independent tree from the pristine source mod. The second one will build another independent tree from the pristine source mod. When the second clause ends, your file gets written out with all the changes you made in the second clause. But your first tree still holds data from the original unedited tree. As soon as it gets written out, it will overwrite anything you did in the second clause.

Syntax Checking Strictness#

The tree-sitter-ndf parser is not a language server, so it will allow some not-quite-correct expressions. It only catches the most bogus syntax errors while letting through things like excessive commas, multiple unnamed definitions, clashing namespaces and member definitions at the root level. You can read more on this in tree-sitter-ndf’s README.md.

Source Is a List But It’s Not#

Ndf syntax is inconsistent with respect to usage of commas. If you have an ndf list then you have to separate entries with commas:

Ndf Code#
[var1 is 12, var2 is 24, var3 is 42, SomeObject is Type(member1 = 12)]

On the other hand, the root level declarations always use newlines and never commas:

Ndf Code#
var1 is 12
var2 is 24
var3 is 42
SomeObject is Type(member1 = 12)

Both declarations operate in virtually the same way, so ndf_parse uses the same class (model.List) to implement both; the only thing that makes the difference is the model.List.is_root attribute. If it’s True then it will act like a source root: it will print with newlines instead of commas and will expect ndf code arguments to insert() and add() to use newlines as statement separators. If is_root is False then it will act like a simple list and will expect ndf code arguments to have commas as separators. Examples:

>>> import ndf_parse as ndf
>>> md = ndf.model
>>> lst = md.List(is_root=False)  # initialize as a simple list
>>> lst.add("1, 2, 3")
[ListRow[0](value='1', visibility=None, namespace=None),
ListRow[1](value='2', visibility=None, namespace=None),
ListRow[2](value='3', visibility=None, namespace=None)]
>>> ndf.printer.print(lst)
[1, 2, 3]
>>> lst.is_root = True  # switch it to behave like a source root
>>> lst.add("""
... 4
... Obj is 5
... Obj is Type()
... """)
[ListRow[3](value='4', visibility=None, namespace=None),
ListRow[4](value='5', visibility=None, namespace='Obj'),
ListRow[5](value=Object[], visibility=None, namespace='Obj')]
>>> ndf.printer.print(lst)
1
2
3
4
Obj is 5
Obj is Type()

Path Relativeness#

By default python interprets relative paths relative to where the program was started. If, for example, you have your script in C:\Users\User\Scripts\mod.py but run your terminal from C:\, your script will interpret all relative paths relative to C:\. If you want your script to always interpret paths relative to itself, you can add these 2 lines at the beginning of your script:

import os
os.chdir(os.path.dirname(__file__))
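If you prefer pathlib, an equivalent sketch (it assumes the code runs from a file, so __file__ is defined):

```python
import os
from pathlib import Path

script_dir = Path(__file__).resolve().parent
os.chdir(script_dir)  # relative paths now resolve next to the script
```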

This rule however does not apply to Mod.edit(), Mod.parse_src() and Mod.parse_dst(). These methods operate relative to Mod.mod_src (for Mod.edit() and Mod.parse_src()) and Mod.mod_dst (for Mod.parse_dst() and Mod.write_edit(); just make sure you have generated some data there before trying to access it).

Typing Ambiguity#

The ndf manual is not very clear on its typing annotation rules. Consider the following example:

Ndf Code#
MemberTyping is TObject(  // object is of type TObject
  // member is of type string
  StringMember : string = "Some text"

  // case similar to one in the manual, we have both member and namespace names
  ObjectMember = InnerNamespace is TObject( ... )

  // syntax allows for this in theory
  WtfMember : MType = CanWeDoThis is TObject( ... )
  AnotherOne : RGBA = RGBA[0, 0, 0, 1]
)

Since there are no clear instructions on whether this is possible, and the syntax rules don’t seem to prohibit such declarations, I had to opt for a cursed solution: model.Template, model.Object and model.List have a type parameter that stores their mandatory type declaration (TObject in this example). So for these specific objects don’t rely on the row’s type parameter. For everything else the row’s type is the way to go.

No Referencing#

model.abc.List, model.abc.Row and their subclasses are implemented with no copy by reference in mind. This is done to prevent unexpected side effects when editing data (accidentally mutating the row you wanted to copy). An example to illustrate the issue:

>>> # this is not ``ndf_parse`` code, these are builtin python types
>>> data_source = {'name': 'my_variable', 'value': '12'}
>>> scene = []
>>> scene.append(data_source)  # we want to copy our data
>>> scene.append(data_source)  # we want to make another copy and edit it
>>> scene[1]['value'] = '24'  # edit the second item in the list
>>> scene[1]  # check the edit
{'name': 'my_variable', 'value': '24'}
>>> # all good, value is edited
>>> scene[0]  # check that the first item is still 12
{'name': 'my_variable', 'value': '24'}
>>> # what?.. both values have changed
>>> data_source  # check the original dict just in case
{'name': 'my_variable', 'value': '24'}
>>> # !!! all 3 have changed !!!

This happens because by default mutable objects (which are the vast majority in python) are passed by reference. So data_source and both scene entries reference the same place in memory. We could easily fix it by importing the copy module and appending like scene.append(copy.deepcopy(data_source)), but that would become very verbose very fast. So ndf_parse is implemented with deep copying on assignment (to its lists) by default. Whenever a model.abc.Row is inserted into a List, it always makes a deep copy of itself (the only exception is if it was a dangling row, i.e. it had no parent list previously). An example to illustrate the implementation:

>>> # this is ``ndf_parse`` code with its types
>>> import ndf_parse as ndf
>>> md = ndf.model  # a simple alias to save on typing
>>> scene = md.List(is_root=True)  # make a scene
>>> row = md.ListRow(value='12', namespace='Var1')  # create a dangling row
>>> row  # check it
ListRow[DANGLING](value='12', visibility=None, namespace='Var1')
>>> scene.add(row)  # add 1st row, will get attached because is dangling
ListRow[0](value='12', visibility=None, namespace='Var1')
>>> row is scene[0]  # our row is now attached to the scene
True
>>> scene.add(row)  # add 2nd row, will copy because already has a parent
ListRow[1](value='12', visibility=None, namespace='Var1')
>>> row is scene[1]  # make sure second insert is a new object
False
>>> scene[1].v = '24'  # edit the second row
>>> scene[1]  # check the edit
ListRow[1](value='24', visibility=None, namespace='Var1')
>>> scene[0]  # check the original
ListRow[0](value='12', visibility=None, namespace='Var1')
>>> # all good, value is still 12
>>> # if you want your ``row`` variable to point to the last inserted row
>>> # for ease of editing, then do this:
>>> row = scene.add(row)  # ``add()`` will return the inserted row
>>> row is scene[2]  # row now refers to the last insertion
True
>>> row is scene[0]  # and no longer refers to the first insertion
False
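For completeness, the plain-Python deepcopy fix mentioned above would look like this:

```python
import copy

data_source = {'name': 'my_variable', 'value': '12'}
scene = []
scene.append(copy.deepcopy(data_source))  # independent deep copy
scene.append(copy.deepcopy(data_source))  # another independent copy
scene[1]['value'] = '24'  # edit only the second copy
# this time the original and the first copy keep their old value
```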

Strict Attributes in Edits#

By default, methods like edit(), add() and insert() operate in a strict mode. This means that they don’t allow passing in parameters that aren’t supported by the Row/List type. This limitation is purely “cosmetic” and serves one purpose: to help one catch errors early. If one deconstructs an incompatible dict into an add() call, it will raise an exception to warn about the error. By setting the _strict argument to False one can put any attributes in the call; they will simply be ignored by the function. An example of how it works and when one might want to override the default behaviour:

>>> import ndf_parse as ndf
>>> source = ndf.convert("export SomeObj is Obj(a = 12)")
>>> source
List[ListRow[0](value=Object[MemberRow[0](value='12', member='a', type=None,
visibility=None, namespace=None)], visibility='export', namespace='SomeObj')]
>>> source.add(n="PI", v="12", test="42")
Traceback (most recent call last):
    ...
TypeError: Cannot set ListRow.test, attribute does not exist.
>>> source.add(n="PI", v="12", test="42", _strict=False)  # disable strict
ListRow[1](value='12', visibility=None, namespace='PI')
>>> # note there is no "test" attribute, it was dropped
>>> my_param = {"n": "MyValue", "vis": "export", "v": "12",
... "description": "Something to use in script's debug output."}
>>> source.add(**my_param, _strict=False)  # disabling strict mode because
>>> # we are sure we won't break the list with our description
ListRow[2](value='12', visibility='export', namespace='MyValue')

Comparing Strings#

This tool cuts corners by representing anything that is not a fundamental structural element (i.e. any list-like) as a string. That includes ref paths (even though the underlying parser knows how to parse those), expressions, ints, floats and strings too. To distinguish, say, an int 12 from a string "12", we embed the quotes themselves into python’s strings. So if we have ndf code that looks like this:

Ndf Code#
SomeInt is 12 // This is an int
SomeString is "12" // this is a string, not an int
IllMakeYouSuffer is '12' // string again but with single quotes

We would get the following items in python:

ListRow(value='12', visibility=None, namespace='SomeInt')
#              v  v note embedded quotes in `value`
ListRow(value='"12"', visibility=None, namespace='SomeString')
#              v  v note embedded quotes but inverted (python adapts to the content)
ListRow(value="'12'", visibility=None, namespace='IllMakeYouSuffer')

The caveat is that even though SomeString and IllMakeYouSuffer are logically the same thing, in python '"12"' != "'12'"! It would be nice to account for that in the Row.compare() method, but for now it’s not there because of 1) time and 2) edge cases. So you should be careful when comparing such values. Since you know the context when working with specific data, you can either keep track of which quotes you are using or strip them before comparing. The first method is preferable because you can’t embed stripping inside existing methods like the aforementioned Row.compare().
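If you do go the stripping route, a minimal helper could look like this (a hypothetical sketch; it only handles the embedded-quote convention described above, not escaped quotes):

```python
def unquote(value: str) -> str:
    """Strip one layer of embedded quotes, if present."""
    if len(value) >= 2 and value[0] == value[-1] and value[0] in "'\"":
        return value[1:-1]
    return value

print(unquote('"12"') == unquote("'12'"))  # True: both strip down to '12'
print(unquote('12'))                       # prints: 12 (unquoted values pass through)
```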