Development Model

There are two main development models for using more advanced web tools with CRM.

  • The traditional model: upload web resources from your local computer as they are edited.

  • Hot reload: as you save resources, the changes are instantly reflected in the UI.

Once development is complete, you need to run a production build to create resources that are optimized for use, for example minified. If you have uploaded any .js.map files, you may wish to remove them since they can grow quite large and may not be appropriate for your production environment.

Generally, when developing UI solutions for CRM, you have a mixed environment for your web resources. You will have raw JavaScript files that have not been bundled or transpiled (e.g. from TypeScript to JavaScript) alongside newer resources that have undergone substantial pre-processing (e.g. webpack bundling).

Fortunately, you can create separate projects for your web resources, or you can create a single folder that houses everything together. As long as your development colleagues coordinate on using the same publisher prefix (if desired) and structure reusable resources (e.g. a standard jquery.min.js file) in similar file hierarchies, you will be fine.

Entry Points

Dynamics web resources require entry points. Entry points are always .html files that you specify in the web resource "insert" dialog. When using React with a webpack bundle, take care to expose your module properly so that it is callable from your .html file. Your HTML can look like the following:

<html>
    <head>
        <script src="../../ClientGlobalContext.js.aspx"></script>
        <script src="some_not_bundled_webapi.js"></script>
    </head>
    <body>
        <div id="container"></div>
        <!-- load as late as possible -->
        <script src="ReactBasedView.js"></script>
        <!-- you only need this if your ReactBasedView.js does not self-load on window.load -->
        <script>
         window.addEventListener("load", init);
         function init() {
             console.log("Initializing ReactBasedView");
             ReactBasedView.run();
         }
        </script>
    </body>
</html>

A few notes:

  • It is still generally a good idea to use the same module pattern across web resources even when they are not using React. In this case, webapi.js (loaded above as some_not_bundled_webapi.js) is a CRM Web API module that only loads properly as a standalone script and does not bundle properly into webpack, so we simply load it in the web resource .html file.

  • ReactBasedView.js is a webpack-bundled, React-based application entry point. It has been built using "library" output with a library name of ReactBasedView. The output file was specified as ReactBasedView.js.

  • ClientGlobalContext.js.aspx is the standard context "addition." You will see two levels of dots here, whereas most examples only show one level ("../Client..."). There are two levels because, in this example, we have placed our outputs into a folder at "publisher_/ui/".

  • webapi.js is a general JS library loaded outside the bundle; it may or may not be used by your bundle. Some libraries are not well behaved and must be included directly in the entry point.

We used window.addEventListener("load", ...), which is generally safe, but you could use document.addEventListener("DOMContentLoaded", ...) instead. It really depends on the dependencies in ReactBasedView.js and how you connect React up to its container.

Given this entry point, we need to ensure that our ReactBasedView.jsx file is properly structured to have a "run" method that can be called. You could have webpack expose an object, a class, or whatever, but exposing a "run" method is easy. You can think of it as a "main()" method.
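For reference, the "library" output mentioned above might be configured like this minimal sketch (the entry path and output folder are assumptions):

// webpack.config.js (excerpt) -- a minimal sketch; paths are assumptions
const path = require("path");

module.exports = {
    entry: { ReactBasedView: "./src/ReactBasedView.jsx" },
    output: {
        path: path.resolve(__dirname, "dist"),
        filename: "[name].js",
        // the module's exports, including run, appear on window.ReactBasedView
        library: "ReactBasedView",
        libraryTarget: "var"
    }
};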

In ReactBasedView.jsx, we need some boilerplate:

import React from "react"
import { render } from "react-dom"
// getXrm is a helper (not shown) that resolves a promise with the Xrm object.
import ReactBasedView from "./ReactBasedview.jsx"
import NewEntityView from "./NewEntityView.jsx"

export function run(el, contactId, userId) {
  getXrm().then(xrm => {
    console.log("Running ReactBasedView");
    const inits = {
        xrm,
        contactId: contactId || xrm.Page.data.entity.getId(),
        userId: userId || xrm.Page.context.getUserId()
    };

    let root = null;
    const noContactIdView =
     <NewEntityView message="This content can be displayed once you save the new Contact."/>;
    const view = <ReactBasedView {...inits}
                                     noContactIdView={noContactIdView}
                                     ref={(ref) => root = ref}/>;
    let cancellable = null;
    const runAfterSave = ctx => {
        // poll until the save completes and a contact id becomes available
        const keepChecking = () => {
            const maybeNewContactId = xrm.Page.data.entity.getId();
            if(maybeNewContactId && !root.getContactId()) {
                root.setContactId(maybeNewContactId);
                clearInterval(cancellable);
                xrm.Page.data.entity.removeOnSave(runAfterSave);
            }
        };
        cancellable = setInterval(keepChecking, 500);
    };

    xrm.Page.data.entity.addOnSave(runAfterSave);
    render(view, el);
  })
}

You can also see that this entry point handles a Contact entity whether the Contact is new or existing. We have written ReactBasedView with an external method, setContactId, that updates the state internally. NewEntityView is shown by ReactBasedView if a contactId is not found. The model above can still be abstracted further using function composition.

For example, the runAfterSave mess is there to wait until a contactId is available to pass to ReactBasedView. There are other approaches to waiting until after a new entity has been saved, but they require adding known fields, such as changedon, to a form and hence are a burden to use and prone to breakage. The above approach is portable, although it uses polling. The key is to ensure that you inject a function into the execution queue after your "save handler" has been called, since the "save handler" is called before the actual save executes.

Here's the shorter, clearer higher-order function:

/**
 * After a save event, run actionToTake if ready returns true. Uses polling.
 * @param xrm Xrm to attach to Xrm.Page.data.entity.addOnSave/removeOnSave.
 * @param ready Returns true if the condition to run actionToTake has been met.
 * @param actionToTake The action to take.
 * @param pollInterval Polling interval in milliseconds.
 */
function runAfterSave(xrm, ready, actionToTake, pollInterval) {
    let cancellable = null;
    const onSaveHandler = (ctx) => {
        const keepChecking = () => {
            if(ready()) {
                xrm.Page.data.entity.removeOnSave(onSaveHandler);
                clearInterval(cancellable);
                actionToTake();
            }
        };
        cancellable = setInterval(keepChecking, pollInterval);
    };
    xrm.Page.data.entity.addOnSave(onSaveHandler);
}

which we can then call using:

    runAfterSave(_xrm,
                 () => _xrm.Page.data.entity.getId() && !root.getContactId(),
                 () => root.setContactId(_xrm.Page.data.entity.getId()),
                 500);

We needed to add a React ref callback in order to get the actual component instance, which we call "root" here. Without it, we have no object on which to check for or set the contactId. The contactId acts as our "state" to determine whether our ReactBasedView was opened on a new or an existing Contact entity.

You could also use recompose to compose a new component that shows the NewEntityView based on some condition. NewEntityView in this case just shows a message inside a div.
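A minimal sketch of such a component (the markup is an assumption):

// NewEntityView.jsx -- a sketch; it just renders the message inside a div
import React from "react"

const NewEntityView = ({ message }) => <div>{message}</div>

export default NewEntityView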

The use of runAfterSave is interesting but not very React-like. The state of the Dynamics application is actually kept in some "context" state associated with the form. Hence, we really need a way to push props down into our component based on changes in that state. This is not unlike state kept in redux, except you do not have control over it.

A better approach is covered in another section; it allows you to do all of the above with just:

function run(el) {
  getXrmP().then(xrm => {
    render(
      <EntityForm xrm={xrm}>
        <MyComponentToDisplayInAFormView />
      </EntityForm>
    , el)
  })
}

This is much simpler. Unlike the external setContactId call, everything is passed through props or context, and you can determine when a "new" form has been saved because an entity id suddenly appears in your props. You have lots of choices.

Assuming MyComponentToDisplayInAFormView handles a change in, say, entityName and entityId passed in from EntityForm, then your component automatically handles new/existing entities. See here for a full implementation of EntityForm.
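For example, a child component could watch for the entityId appearing in its props after a save. This is only a sketch, and the prop names are assumptions based on the description above:

// A sketch only: reacts when entityId first appears (i.e. a new record was saved).
import React from "react"

class MyComponentToDisplayInAFormView extends React.Component {
    componentWillReceiveProps(nextProps) {
        if (!this.props.entityId && nextProps.entityId) {
            // the "new" form has just been saved; fetch data for the new record here
        }
    }
    render() {
        return this.props.entityId
            ? <div>Content for the existing {this.props.entityName}.</div>
            : <div>This content can be displayed once you save.</div>
    }
}

export default MyComponentToDisplayInAFormView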

We will cover more advanced scenarios for your entry point, including injecting your WebResource into the parent of the iframe. That is a much better place to inject into, since Dynamics web forms in the traditional web interface are quite awful and use tables for layout, which causes problems at times. The UCI interface is much better, of course.

Config

If you are building a component, you will want flexible configuration. There are a few different ways to provide config:

  1. A config record in an entity that is designed to hold config data. You would need to dynamically pull this record in run above.

  2. A config record in web resources, e.g. viewConfig.js. You would need to dynamically pull this in.

  3. Config in the web resource's data property, i.e. the HTTP query parameters that you can add in that little tiny box when adding a web resource to a form.

  4. A config object in the HTML entry point that passes a config to the run method.

Generally, you should support at least ways 3 and 4. Why? They are easy.

Web Resource data property

If you type a config object into that little box in the Dynamics form editor, you can type raw JSON. Note the quotes around property names in the example below; they are required because this is not JavaScript directly but JSON that will be parsed by JSON.parse. The content you type into the form editor is indexed by data.

{"viewProps": {"style": {"height": 300}}}

We can use a general viewProps key to indicate that these are view config properties. In the .jsx file, inside run() we can do:

// getURLParameters is a helper; you could also use the Xrm query parameter APIs
const data = myutils.getURLParameters("data")
// guard against a missing data parameter; JSON.parse throws on invalid input
const params = data ? JSON.parse(data) : {}
const props = Object.assign({}, params.viewProps) // add more here in precedence order
...
render(
  <TopLevelComponent
    className={props.className}
    style={props.style}
  />
, target
)

Obviously, your TopLevelComponent must pick out className or style from the props and apply them correctly, in the right precedence order:

const TopLevelComponent = ({className, style, ...rest}) => {
  return (
    <div className={className} style={style}>
    ...
    </div>
  )
}

Generally, the precedence order for initialization should be (highest first):

  • Some Dynamics entity that holds a JSON string for configuration.

  • Props passed to a React element that you type in.

  • Props passed into the run method (which come from your entry points).

  • Props passed in through query string parameters via data/viewProps.

HTML Entry Point properties

You can also pass in parameters via run:

export function run({entityId, viewProps, ...rest}) {
 ...
 // viewPropsFromData comes from parsing the data query parameter as shown above
 const props = Object.assign({}, viewPropsFromData, viewProps) // precedence set by the order
 ...
}

Where's React?

You can include React in each WebResource that you deploy; this is the safest approach. However, if you have many components in WebResources that will be loaded into the same form, you can load React into the top-level form under form properties in the form editor and access it in your iframe using window.parent.parent.React and window.parent.parent.ReactDOM. You need two parent links because the form scripts are loaded into top.contentIFrame0.customScriptsFrame.

You can define an alias in webpack to make this easier. However, once you do, you will need to separate your webpack configurations into those that use this approach to React versus those that bundle it into your module directly.

const React = window.parent.parent.React
const ReactDOM = window.parent.parent.ReactDOM
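Alternatively, webpack externals can emit the parent-window lookup wherever your code imports react; a sketch, assuming React really is loaded on the parent form:

// webpack.config.js (excerpt) -- the string values are emitted verbatim as each module's value
module.exports = {
    externals: {
        "react": "window.parent.parent.React",
        "react-dom": "window.parent.parent.ReactDOM"
    }
};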

Note that some versions of Dynamics Online already include React 15.4.2, so you only need to add the libraries to your form if Dynamics does not already load them. Of course, relying on this is completely unsupported and may not work at all, so prefer webpack bundling. After all, React's bundle size is dropping in v16.

Babel Polyfills

Babel has two primary ways to polyfill: a global polyfill and a runtime ("library") polyfill. You should use the runtime approach. It does not touch a global polyfill object, so you do not need to include the babel polyfill "once" in your top-level application. The library version references a polyfill module that is easier to slice and dice to suit your needs and is appropriate for WebResources. You enable this in your .babelrc using:

"plugins": [
        "transform-runtime",
         ...
]

The babel-runtime module is automatically included in your webpack build after you have installed babel-plugin-transform-runtime into your package.json.
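Assuming a Babel 6 setup like the .babelrc below, the installation is:

npm install --save babel-runtime
npm install --save-dev babel-plugin-transform-runtime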

My .babelrc:

{
    // this by itself does not make babel emit source maps; you need webpack or CLI parameters
    // I don't use inline source maps...otherwise the value below could be "inline" instead of true
    "sourceMaps": true,
    "presets": [
        [
            "env",
            {
                "targets": {
                    "browsers": [
                        "last 4 Chrome versions"
                    ]
                }
            }
        ],
        "react",
        "stage-2"
    ],
    "plugins": [
        "transform-runtime",
        "transform-decorators",
        "transform-object-rest-spread",
        "transform-es2015-modules-commonjs",
        "dynamic-import-node"
    ]
}

The use of transform-runtime is like using tslib in TypeScript: helper code is imported from a shared module instead of being repeated in every file.

TypeScript

Just use TypeScript (or Flow) as much as possible. It will save you hours of headaches.

I use TypeScript followed by Babel in my loaders.

Why?

I target esnext in TypeScript, then use Babel polyfills to fill in what's missing. My .babelrc is always explicit about browser versions as well, e.g. the last few Chrome versions, as in the .babelrc above.

{
    "compilerOptions": {
        "jsx": "preserve",
        "strictNullChecks": true,
        "allowJs": true,
        "importHelpers": true,
        "noEmitHelpers": true,
        "experimentalDecorators": true,
        "emitDecoratorMetadata": false,
        "allowSyntheticDefaultImports": true,
        "inlineSourceMap": false,
        "sourceMap": true,
        "noEmitOnError": false,
        "traceResolution": true,
        "preserveSymlinks": false,
        "moduleResolution": "node",
        "target": "esnext",
        "module": "commonjs",
        "baseUrl": ".",
        "paths": {
            "BuildSettings": [
                "./src/BuildSettings.development"
            ],
            "es6-promise-pool": [
                "./typings/es6-promise-pool"
            ]
        }
    },
    "exclude": [
        "node_modules",
        "dist",
        "**/*.spec.ts",
        "**/*.test.ts",
        "**/*.spec.js",
        "**/*.test.js"
    ],
    "include": [
        "src/**/*"
    ]
}

You can see that we keep strict null checks on, which means we must explicitly model null where it is allowed. "module": "commonjs" means TypeScript emits CommonJS module syntax, e.g. var x = require(...), which is fine since we transpile to our target JavaScript level using Babel after TypeScript runs, and webpack understands CommonJS, so it is a good choice. You can also see that I needed to remap es6-promise-pool (which you may or may not use) to pick up corrected typings, as the published ones are in error. You cannot just place those into a "./typings" folder, since the "typeRoots" setting in tsconfig.json is not consulted when resolving imports; it is really only used for global declarations.

Also note that we emit source maps as separate files. Assuming webpack's devtool is set, babel-loader will also generate source maps as separate files (since .babelrc's sourceMaps is true and not "inline").

We use the paths mapping to map an import to a specific BuildSettings module. BuildSettings holds our build constants. There are not many:

// ./src/BuildSettings.development.ts
export const API_POSTFIX: string = "/api/data/v9.0/"
export const DEBUG: boolean = true
export const BUILD: string = "DEV"
export const CLIENT: string = "UNIFIED"

There's another for production. Notice that we just put them into the src directory, then remap the import to the correct one in both tsconfig and webpack. There is a plugin (e.g. tsconfig-paths-webpack-plugin) that can keep tsconfig.json and webpack.config.js in sync.
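For reference, the production counterpart might look like this sketch (the values are assumptions):

// ./src/BuildSettings.production.ts -- a sketch; values are assumptions
export const API_POSTFIX: string = "/api/data/v9.0/"
export const DEBUG: boolean = false
export const BUILD: string = "PROD"
export const CLIENT: string = "UNIFIED"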

My webpack loader for this is:

{
    test: /\.tsx$|\.ts$/,
    include: [paths.srcdir, /BuildSettings/],
    exclude: [/node_modules/, /__tests__/],
    use: [
        { loader: "babel-loader" },
        {
            loader: "ts-loader",
            options: {
                compilerOptions: {
                    paths: {
                        // replace the default in tsconfig.json
                        "BuildSettings": [buildSettings],
                        "es6-promise-pool": ["./typings/es6-promise-pool"]
                    }
                }
            }
        }
    ]
},

We pass buildSettings into the config function (versus statically declaring the config) so that we can swap it between dev and prod. In general, you should use functions to parameterize your webpack config files; I have very few static config declarations.

Babel can be fine-tuned around polyfills, so using it ensures that any esnext features TypeScript assumes are present get filled in. Because webpack applies loaders bottom-up, placing ts-loader last in the array means it runs first.

I also include tslib so that any helpers TypeScript adds to each module, e.g. around async generators, reference a common library instead of repeating the code in each module. This is much like the use of the Babel polyfill for libraries (the transform-runtime plugin). importHelpers puts a require("tslib") in each tsc-compiled module, and noEmitHelpers means the helpers themselves are not emitted inline, so each TypeScript extension references tslib.

Build Settings in your code

You saw how we used BuildSettings for compile-time injection of constants. I also mentioned that you should use webpack config functions rather than static declarations:

...
function common(buildSettings) {
  return {
   ...
  }
}
...
// merge is from webpack-merge; main, prod, and dev are config fragments defined above (elided)
module.exports = function(env) {
  let buildSettings = "./src/BuildSettings.development"
  if(env.BUILD_KIND === "production") buildSettings = "./src/BuildSettings.production"
  switch(env.BUILD_KIND) {
    case "production":
      return merge(common(buildSettings), main, prod)
    default:
      return merge(common(buildSettings), main, dev)
  }
}

This is one way to handle config variables. The other way, of course, is to store them in a JSON file and then alias the import in webpack. The reason you may want to do this instead of using DefinePlugin is that you may use other build tools that do not integrate well with webpack and hence would not have access to the variables DefinePlugin inserts directly into each module; testing frameworks come to mind.
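A sketch of the JSON-file-plus-alias alternative (the file name is an assumption):

// webpack.config.js (excerpt) -- alias the BuildSettings import to a JSON file
const path = require("path");

module.exports = {
    resolve: {
        alias: {
            BuildSettings: path.resolve(__dirname, "src/BuildSettings.development.json")
        }
    }
};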

In your source files import { DEBUG } from "BuildSettings".

As before, when you use Uglify or the Closure Compiler, if() blocks with constant-false conditions, e.g. if(DEBUG), will be removed from the code.
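For example (a sketch):

// with DEBUG=false at build time, a minifier removes the whole if block as dead code
import { DEBUG } from "BuildSettings"

export function fetchContacts(api) {
    if (DEBUG) {
        console.log("fetching contacts via", api)
    }
    // ...
}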

For webpack, use --env.BUILD_KIND to set the prod flag while still allowing other flags to be set via the --env webpack CLI parameter. Note that many people test process.env.NODE_ENV === "production" to switch large blocks of code on or off depending on the prod or dev nature of the build. In webpack, this is automatically set when using the -p CLI parameter. However, don't use -p, as it does not allow you to override the Uglify plugin. Instead, set up your own variables. You can be quite sophisticated: set environment variables, set them on the command line, or hold them in a build config file.

For me, when BUILD_KIND === "production", I also use DefinePlugin to set process.env.NODE_ENV to "production" so that it can be used as well; it's quite common, and some dependent libraries rely on that model. Note, however, that when webpack processes your files it is not running them in the node environment, so you can use whatever you want and simply emulate the rather confusing process.env.NODE_ENV behavior.
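A sketch of mirroring BUILD_KIND into process.env.NODE_ENV:

// webpack.config.js (excerpt) -- a sketch; env comes from the --env CLI flags
const webpack = require("webpack");

module.exports = function(env) {
    const isProd = env && env.BUILD_KIND === "production";
    return {
        plugins: [
            new webpack.DefinePlugin({
                "process.env.NODE_ENV": JSON.stringify(isProd ? "production" : "development")
            })
        ]
    };
};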

Testing Frameworks

Don't forget testing. I use it extensively, especially for the non-UI parts of my code that deal with data or complicated logic, e.g. query builders.

The JavaScript world splits the testing tools apart into:

  • Runners

  • Frameworks

  • Matchers

  • Reporters

  • ...more stuff split out in js land...

You can mix and match, and some tools cover more than one area. Popular tools are Karma (test runner); Jasmine, Mocha, and Jest (frameworks); and Chai (matchers).

Almost all frameworks include some type of test runner, but dedicated test runners like Karma are often quite flexible because they are designed from the start to integrate with other environments.

Note that if you use Jest, from Facebook, with TypeScript, you may want to look into ts-jest, which ignores webpack but does use tsconfig.json. If you use Karma, it integrates with webpack quite nicely, which lets you use webpack as a build tool more easily than with other tools like Jest.
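A sketch of wiring ts-jest into package.json (treat the exact keys as assumptions for your ts-jest version):

// package.json (excerpt) -- ts-jest compiles .ts/.tsx test files using your tsconfig.json
"jest": {
    "transform": { "^.+\\.tsx?$": "ts-jest" },
    "moduleFileExtensions": ["ts", "tsx", "js"]
}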

Documentation

Documentation generation is crazy in the JavaScript world. Sure, you can use markdown files for broad documentation, but API documentation, especially for libraries that you build for Dynamics, is a different matter.

jsdoc generates documentation from comment metadata, a fairly standard approach that is common in other languages.

There are also TypeScript-specific documentation generators that add type information to your documentation. Type information is sometimes better than comment metadata because you can see the author's intent much more clearly. See https://github.com/xperiments/TSDoc.
