This is in relation to Dojo 2.0 discussions. While it concerns an
implementation in RequireJS, I do not assume that RequireJS will be
the module loader for Dojo 2.0. I hope it will though, and this
discussion is to make sure what RequireJS might implement would fit
with Dojo's needs.

In particular, I am trying to work out how to deal with packages. This
has special relevance when considering dojox. I think it makes sense
to start with CommonJS packages and see how they could be loaded in
the browser. I made an outline of what I think that means for
RequireJS here:

http://github.com/jrburke/requirejs/blob/master/docs/design/packages.md

Please feel free to comment on this thread, or in the RequireJS group's thread:
http://groups.google.com/group/requirejs/browse_thread/thread/e724016dc8eb8860

James

  • Kris Zyp at Aug 17, 2010 at 1:27 pm
    I believe that requiring a dependency on a server side component
    (package manager) for first class development is probably a non-starter
    for Dojo, and doesn't seem like a good idea for RequireJS either. I
    believe that the current approach (both in Dojo and RequireJS) of
    allowing developers to start developing without any server side
    executable, and then allowing them to later optimize for improved
    performance, is ideal. Developers should be able to start development
    with just a web server and have it Just Work without any server side effort.

    I would recommend the following approach, which provides very simple,
    easy to understand handling of packages, consistent behavior with
    traditional directory structures, and flexibility in the options users
    have for dealing with packages. Let's say we have modules that we want
    to use from a package accessible at http://some-site.com/foo/ (the root
    of the package), and it has a dependency on
    http://dojotoolkit.org/dojox/json/. We want to use this package and
    refer to it as "foo", so that we can load its bar module with "foo/bar"
    (like dojo.require("foo.bar") or
    require.def(["foo/bar"], function(bar){...})). The user would have the
    following options:

    a. Client side RequireJS/loader package.json support - RequireJS
    provides a registerMapping method that registers a mapping, given
    either a URL to a package.json or a mapping with exactly the same
    format as package mappings (no new packagePaths format to learn), and
    recursively retrieves the package.json for the target packages to
    transitively register all the appropriate module paths:
    require.registerMapping({
      "foo": "http://some-site.com/foo/"
    });
    This would effectively map the "foo" path to "http://some-site.com/foo/"
    and the "json" path to "http://dojotoolkit.org/dojox/json/" (which it
    would discover from the mappings in foo's package.json).

    No download to the server or server side execution is required for
    option a; it relies purely on client cross-domain loading capabilities
    (which are trivial with RequireJS modules) and allows packages to be
    utilized with nothing more than a single mapping declaration (without
    worrying about dependencies).
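
    To make the behavior concrete, here is a minimal sketch of what
    registerMapping might do internally (hypothetical: the fetchJson helper
    is made up, and a real loader would need async handling and cycle
    detection):

    require.registerMapping = function (mappings) {
      for (var prefix in mappings) {
        var root = mappings[prefix];
        // Assume the standard CommonJS package layout: modules live in lib/.
        require.registerPath(prefix, root + "lib/");
        // Fetch the package's descriptor and recurse into its mappings so
        // that transitive dependencies get registered too.
        fetchJson(root + "package.json", function (pkg) {
          if (pkg.mappings) {
            require.registerMapping(pkg.mappings);
          }
        });
      }
    };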

    b. Use require.registerPath or require = {paths:{...}} for manual
    configuration of the paths - Here we have to specify all the paths
    (dependencies are not automatically registered for us):
    require = {paths: {
      "foo": "http://some-site.com/foo/lib/",
      "json": "http://dojotoolkit.org/dojox/json/lib/"
    }};

    This accomplishes the same configuration as option a, but has the
    advantage of eliminating the request for the package.json of dependent
    packages, since we explicitly specify all the paths. No download to
    the server or server side execution is required for option b either,
    but it has the disadvantage of needing to manually specify all
    transitively required paths ourselves.

    c. Manually set up our paths via the directory structure on the server -
    Here we download the archives for http://some-site.com/foo/ and
    http://dojotoolkit.org/dojox/json/ and copy the contents of their lib
    directories into /my-js-directory/foo/ and /my-js-directory/json/
    respectively (note that we don't extract the entirety of the package
    from the root, just the lib into the target directory). Now our server
    directory structure matches our expected client side hierarchy. No
    client side path or dependency registration is required now, since the
    server URLs match our expected package mappings. We still don't require
    any server side execution or components, but it does require manual
    copying of files.
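
    The resulting server layout would look something like this (file names
    are illustrative):

    /my-js-directory/
      foo/          <- contents of foo's lib/ directory
        bar.js
      json/         <- contents of dojox/json's lib/ directory
        ref.js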

    d. Run a build - The build process would be enhanced to understand the
    package mappings (from a registerMapping or from a package.json at the
    root of the main app) and will automatically download the package and
    dependent packages (http://dojotoolkit.org/dojox/json/) and copy the lib
    directories into the target server directory accomplishing exactly the
    same thing as option c. Of course the build process could also do file
    concatenation and compression for layers as well.

    e. Use a package manager - Here we leverage some of the ideas from
    James' package manager. We can do something like:
    node pkg.js add http://some-site.com/foo.zip

    And the package manager will automatically download the package and
    dependent packages (http://dojotoolkit.org/dojox/json/) just like the
    build does (option d), but in a more command-line directed fashion (but
    still quite understandable).

    f. Use Nodules + Transporter - If you are using SSJS this has the
    advantage of avoiding any extra server side or client side
    configuration. Applications can utilize a package.json for package
    mapping on both the client and server side, and the server side
    automatically maps the URLs to appropriate packages based on these
    mappings. This avoids the need for any package management steps
    altogether. From the client side this looks the same as c-e (the URLs
    correspond directly to the module ids used in the require calls), but
    the URLs are translated to real packages (without requiring a
    denormalized directory structure).

    Remember, these options all accomplish the goal of being able to do
    dojo.require("foo.bar") or require.def(["foo/bar"], function(bar){...});
    in different ways. More importantly, we can switch between options
    without affecting our source code. We can start with client side loading
    of dependencies, later download the dependencies, or later do a build,
    and our code that relies on the modules can remain unchanged.
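
    For example, application code like this (the bar API here is made up)
    works identically under every option above:

    require.def(["foo/bar"], function (bar) {
      // Whether foo/bar was loaded cross-domain, copied locally, or
      // inlined by a build, the consuming code does not change.
      bar.doSomething();
    });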

    One important thing to point out is that we are not introducing any
    incompatibility with the existing Dojo directory structure. DojoX can
    rightly be broken down into a set of individual packages that would each
    follow the standard CommonJS package layout (with a lib directory and
    package.json) for individual download; the current directory
    structure of DojoX is simply the post-build/denormalized structure of
    the combination of all of the DojoX packages, extracted and mapped using
    the expected package aliases. Consequently we could say that option g is
    to download a denormalized set of packages like DojoX.

    Also, as far as the use of the "dependencies" property in package.json:
    it is specified one way in the original package spec, has a funky version
    range object in NPM, and is required to be an array in Narwhal (which
    throws on an object). The "dependencies" property is intractably
    incompatible and a hopelessly lost cause. I would stay as far away from
    that as I could.

    Anyway, once again, this is a very straightforward, flexible approach
    requiring only the very minimal addition to Dojo or RequireJS of a
    registerMapping function. This function simply registers paths and
    downloads the target package.json to recursively register target paths.
    It should be implementable in just a few lines of code, since both Dojo
    and RequireJS already provide means for registering paths. Package
    mappings would also allow us to get more advanced and only download
    target package.json files if the prefix is actually used in a required
    module (it is entirely possible to have a package with a variety of
    modules and use a subset that does not depend on all the mappings), but
    that wouldn't be necessary.

    Thanks,
    Kris
  • James Burke at Aug 17, 2010 at 2:32 pm

    On Tue, Aug 17, 2010 at 10:27 AM, Kris Zyp wrote:
    I believe that requiring a dependency on a server side component
    (package manager) for first class development is probably a non-starter
    for Dojo, and doesn't seem like a good idea for RequireJS either. I
    believe that the current approach (both in Dojo and RequireJS) of
    allowing developers to start developing without any server side
    executable, and then allowing them to later optimize for improved
    performance, is ideal. Developers should be able to start development
    with just a web server and have it Just Work without any server side effort.
    I think maybe we have a disconnect. The pkg.js script is something the
    developer runs on their box to just automate setting up their app
    config, and to also download a package to their local project. It is
    not something that runs on a server or is used during runtime as part
    of serving the web app.

    I expect a great number of packages will be available from web sites
    to download and include in projects, but those sites do not want to
    become CDNs, serving modules to every end user's browser running an
    app that happens to depend on their package.

    Similarly, I think it hurts runtime performance to have to fetch
    remote package.json files, parse them, then configure the paths for
    every page load in the browser.

    So the goal in the design doc was to illustrate the configuration in
    RequireJS that could be coded manually (the "packagePaths" or
    "packages" properties), if the developer decides to download a package
    manually, but to also spec out a command line tool (pkg.js) that the
    developer could use to automate that tedious work of "download
    package, configure its path".

    There is an option in the config so that the packagePaths/packages
    could refer to a remote module on a CDN/remote host, but it does not
    require parsing the package.json on the remote host for every page
    load; it is a config that should be set up once by the developer
    during development time/setup.

    Maybe we can discuss this issue first, before getting to the rest of
    your response, because it seems like a fundamental design choice. It
    seems like you prefer the approach where the module loader in the browser
    figures it out based on package.json contents, but it seems like that
    entails more network calls and still requires a developer to configure
    the location of the package.json file. If the developer has to do some
    configuration, might as well just configure the final package path.
    Hopefully most packages will not deviate from the "lib" and "main.js"
    norms so that the configuration will be terse/shorter than a path to a
    package.json.
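
    For reference, the configuration sketched in the design doc is along
    these lines (the shape is approximate; see the linked packages.md for
    the authoritative version):

    require({
      baseUrl: "./",
      packages: [
        // A package following the "lib" and "main.js" norms stays terse:
        "alpha",
        // Only a package that deviates needs the longer form:
        { name: "foo", location: "packages/foo", lib: "lib", main: "main" }
      ]
    });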

    BTW, I used "dependencies" in the sample config, but I am not sure how
    it interacts with "mappings". Basically, I want some structure that
    says "package is called X, it can be downloaded here". I do not mind
    supporting a variant that says "I want version Y of package X", and
    then a server is used to look up the mapping for that, but to me that
    is an optional thing, something to help ease the pain of setup.
    However using URLs to locate and download a package is the baseline
    support I prefer. So whatever structure works best for that, I am open
    to it.

    James
  • Dustin Machi at Aug 17, 2010 at 3:16 pm
    Replies inline.
    I think maybe we have a disconnect. The pkg.js script is something the
    developer runs on their box to just automate setting up their app
    config, and to also download a package to their local project. It is
    not something that runs on a server or is used during runtime as part
    of serving the web app.

    I expect a great number of packages will be available from web sites
    to download and include in projects, but those sites do not want to
    become CDNs, serving modules to every end user's browser running an
    app that happens to depend on their package.
    As I understand it, it's not required for runtime use and would only be
    used to set up the local environment. I don't have a problem with a
    utility to do this; some people (sfoster, for example) have already
    worked towards various parts of this utility. However, I object to it
    being a required element. One should be able to just download and
    decompress the packages, or include them as an svn external/git
    submodule directly. They shouldn't need an additional transformation
    step.

    As I understood Kris' proposal, the package.json would basically be
    retrieved, the package would be registered, and then subsequent requests
    would know how to retrieve resources from within that package. In a
    build environment this information would be packaged up, so whether it
    is a larger build or a build of just a specific package, the mappings
    could simply be built into the output, meaning the extra requests would
    only be there in a non-build environment. The extra request itself could
    be avoided by individuals providing a package mapping on their own, so
    modules with a package mapping wouldn't be required to retrieve the
    package.json.
    Similarly, I think it hurts runtime performance to have to fetch
    remote package.json files, parse them, then configure the paths for
    every page load in the browser.

    So the goal in the design doc was to illustrate the configuration in
    RequireJS that could be coded manually (the "packagePaths" or
    "packages" properties), if the developer decides to download a package
    manually, but to also spec out a command line tool (pkg.js) that the
    developer could use to automate that tedious work of "download
    package, configure its path".

    There is an option in the config so that the packagePaths/packages
    could refer to a remote module on a CDN/remote host, but it does not
    require parsing the package.json on the remote host for every page
    load; it is a config that should be set up once by the developer
    during development time/setup.
    It doesn't seem like these are mutually exclusive, as above. It simply
    means that if no mapping is available, the loader retrieves it by
    loading the package.json. In that regard, an application could simply
    create or generate this mapping, so that even in development it need not
    be an issue.
    Maybe we can discuss this issue first, before getting to the rest of
    your response, because it seems like a fundamental design choice. It
    seems like you prefer the approach where the module loader in the browser
    figures it out based on package.json contents, but it seems like that
    entails more network calls and still requires a developer to configure
    the location of the package.json file. If the developer has to do some
    configuration, might as well just configure the final package path.
    Hopefully most packages will not deviate from the "lib" and "main.js"
    norms so that the configuration will be terse/shorter than a path to a
    package.json.
    It gets defined anyway, so it seems that duplicating that information is
    basically what forces the need for the pkg tool. Since we can bypass the
    package.json altogether, or even a specific mapping, by telling the
    loader to assume the common package layout, it seems like this is a good
    approach; a sketch of that bypass follows.
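
    A sketch of what that bypass might look like (the second argument and
    its assumeLayout flag are entirely hypothetical, not part of any
    current proposal):

    // Register the mapping but skip fetching package.json, assuming the
    // standard CommonJS layout of lib/ and main.js:
    require.registerMapping({
      "foo": "http://some-site.com/foo/"
    }, { assumeLayout: true });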

    Dustin
  • James Burke at Aug 23, 2010 at 9:03 pm
    Responding to Kris and Dustin's comments, but summarizing:

    1) Do not require command-line tool:
    In the context of Dojo 2.0, I think it is fine to deliver the
    command line package tool with the loader. Most if not all users will
    want to use it. Trying to download a zip manually, unzip it in the
    right place, and then do the app config will just be too tedious. The
    steps the command line tool performs will be documented if people want
    to do them themselves, but I expect almost no one to actually do the
    steps manually.

    2) Allow parsing package.json files in the browser as part of a running web app:
    There are too many caveats with this approach to make it desirable: the
    server needs to allow xdomain use, or the server has to offer some
    xdomain-safe equivalent; the modules have to be in the module format
    expected by the loader (for instance, if RequireJS is used, then the
    modules cannot be plain CommonJS modules); and the package host needs
    to be comfortable being a mini-CDN. If the command-line package tool
    is available with the loader, then it is just simpler to always
    instruct the developer to use that. To me, the on-the-fly in-browser
    package.json parsing's benefit is outweighed by the number of rough
    edges and the documentation/communication it requires to use it
    effectively.

    James
  • Kris Zyp at Aug 23, 2010 at 10:05 pm

    On 8/23/2010 7:03 PM, James Burke wrote:
    Responding to Kris and Dustin's comments, but summarizing:

    1) Do not require command-line tool:
    In the context of a Dojo 2.0 I think this is fine to deliver the
    command line package tool with the loader. Most if not all users will
    want to use it. Trying to download a zip manually, unzip it in the
    right place then do the app config will just be too tedious. However,
    the steps the command line tool does will be documented if people want
    to do it themselves. However I expect almost no one to actually
    manually do the steps.
    I think this might be somewhat of an underestimation of developers'
    prowess at being able to unzip and copy files. Given how frequently
    people have difficulty getting command line tools to work properly, I
    wouldn't be surprised to see a lot of users manually unpacking packages.
    The world is never as ideal as we would hope, and manual methods always
    get more use than one would expect.
    2) Allow parsing package.json files in the browser as part of a running web app:
    There are too many caveats with this approach to make it desirable: the
    server needs to allow xdomain use, or the server has to offer some
    xdomain-safe equivalent, the modules have to be in the module format
    expected by the loader (for instance if RequireJS is used, then the
    modules cannot be plain CommonJS modules)
    RequireJS doesn't support plain CommonJS modules anyway, does it?
    , and the package host needs
    to be comfortable being a mini-CDN. If the command-line package tool
    is available with the loader, then it is just simpler to always
    instruct the developer to use that. To me, the on-the-fly in-browser
    package.json parsing's benefit is outweighed by the number of rough
    edges and documentation/communication it requires to use it
    effectively.
    Meaning you wouldn't use it for a production app, or you don't want any
    developer to ever have the option of using it for development or
    otherwise? Did you see the patch [1]? It's about a dozen lines of code,
    and we could easily default it out of the build or move it out of base,
    making the cost almost nil. I don't have any expectation that this
    would be the most commonly used or the first recommendation for
    including packages, but when you consider that this could be the
    quickest way for many developers to utilize packages (no download or
    execution required) and could have some powerful uses in certain
    situations (like in intranets with shared libraries), I find it hard to
    see how we would be doing our users a favor by denying them this option.

    [1] http://bugs.dojotoolkit.org/attachment/ticket/11584/packages.diff

    Thanks,
    Kris
  • James Burke at Aug 23, 2010 at 11:41 pm

    On Mon, Aug 23, 2010 at 7:05 PM, Kris Zyp wrote:
    RequireJS doesn't support plain CommonJS modules anyway, does it?
    Correct. The CommonJS modules need to be transformed to be usable by
    RequireJS; they need to have the require.def() wrapper put around
    them. The r.js adapter for Node [1] will do this on the fly, as the
    file is loaded, but in the browser implementation the files are assumed
    to be already transformed, to help control the size of the loader.

    This is one of the things that can be done by the command line tool
    (converting the module), so that the conversion code is not in the
    browser-loaded loader. In particular, I believe this is an important
    advantage to have: being able to re-use CommonJS modules in a
    RequireJS scenario without the original author needing to explicitly
    support RequireJS.

    [1] http://requirejs.org/docs/download.html#node
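
    As an illustration, the wrapping amounts to something like this (a
    sketch; the tool's exact output may differ). A plain CommonJS module
    such as:

    var json = require("json");
    exports.parse = function (s) { return json.parse(s); };

    becomes, after conversion for the browser:

    require.def("foo/bar", ["require", "exports", "module"],
      function (require, exports, module) {
        var json = require("json");
        exports.parse = function (s) { return json.parse(s); };
      }
    );
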
    , and the package host needs
    to be comfortable being a mini-CDN. If the command-line package tool
    is available with the loader, then it is just simpler to always
    instruct the developer to use that. To me, the on-the-fly in-browser
    package.json parsing's benefit is outweighed by the number of rough
    edges and documentation/communication it requires to use it
    effectively.

    Meaning you wouldn't use it for a production app, or you don't want any
    developer to ever have the option of using for development or otherwise?
    I do not see it as worth the cost to offer at all. For it to work, the
    package's domain has to be opened up for CORS access (assuming you want
    just XHR and do not want to support IE 6-7, or even 8), and that can be
    a complex topic. The easy way to get it to work, allowing requests from
    any domain, is probably not the most secure advice to offer. And that
    assumes the package developer wants to allow serving the package from
    their domain; there now needs to be an option in the package.json to
    explicitly disallow the loader from loading the code from the other
    domain. Then there are the failure cases you have to support in code
    that has to be included in the loader. It might mean a few custom
    exceptions to be thrown, but as I mentioned, the documentation of the
    possible problems and the specific situations where they would apply
    does not seem worth the benefit. Particularly given that the steps the
    developer needs to go through are no different:

    Developer time package processing:
    - Download the loader that has the package tool.
    - Use the package tool to fetch the package and configure your app:
      pkg.js add foo http://my-site.com/foo/

    If modules are to be loaded remotely (but app config done locally):
      pkg.js addRemote foo http://my-site.com/foo/

    Runtime package processing:
    - Download the loader that has the package tool.
    - Open the top level app config file.
    - Enter the package mapping:
      foo: "http://my-site.com/foo/"
    - Save the file.

    It is true that the developer-time option means the developer may need
    to serve the app files from their own domain, but it works in more
    cases. For packages that can be served remotely, the addRemote option
    still allows that, and the command line tool can give richer feedback
    on errors than the runtime case, unless the richer error control is
    downloaded with the loader in the runtime case.
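
    For concreteness, the app config produced by the two pkg.js commands
    above might look something like this (hypothetical output; property
    names and paths are illustrative):

    require({
      packages: [
        // pkg.js add: the package was downloaded into the project.
        { name: "foo", location: "packages/foo" }
        // pkg.js addRemote would instead point location at
        // "http://my-site.com/foo/" and leave the files remote.
      ]
    });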

    To me, one of the problems with Dojo is that it does offer so many
    routes and options, particularly the build tool, which is largely my
    fault. By simplifying the options, I believe it will make it easier to
    explain the toolkit to others and lead to fewer edge case errors that
    can make it seem scary to use.
    Did you see the patch [1]? It's about a dozen lines of code, and we
    could easily default it out of the build or move it out of base, making
    the cost almost nil. I don't have any expectation that this would be the
    most commonly used or the first recommendation for including packages,
    but when you consider that this could be the quickest way for many
    developers to utilize packages (no download or execution required) and
    could have some powerful uses in certain situations (like in intranets
    with shared libraries), I find it hard to see how we would be doing our
    users a favor by denying them this option.
    I did see the patch, and as I think I commented in a different thread,
    there is more to that support than just the patch. There is the build
    tool support to consider, which would effectively mean building the
    command line package tool anyway if you allow those package modules to
    be included in a build layer.

    Even if I am wrong on the usefulness of runtime parsing, to me the
    most basic unit that needs to work, since it applies to more cases, is
    the command-line case where the module is downloaded during
    development time, so I prefer to make sure that works well before
    pushing on the runtime case.

    James
  • Kris Zyp at Aug 24, 2010 at 9:18 am

    On 8/23/2010 9:41 PM, James Burke wrote:
    On Mon, Aug 23, 2010 at 7:05 PM, Kris Zyp wrote:
    [snip]
    , and the package host needs
    to be comfortable being a mini-CDN. If the command-line package tool
    is available with the loader, then it is just simpler to always
    instruct the developer to use that. To me, the on-the-fly in-browser
    package.json parsing's benefit is outweighed by the number of rough
    edges and documentation/communication it requires to use it
    effectively.

    Meaning you wouldn't use it for a production app, or you don't want any
    developer to ever have the option of using it for development or otherwise?
    I do not see it as worth the cost to offer at all. For it to work, the
    package's domain has to be opened up for CORS access (assuming you want
    just XHR and do not want to support IE 6-7, or even 8), and that can be
    a complex topic. The easy way to get it to work, allowing requests from
    any domain, is probably not the most secure advice to offer. And that
    assumes the package developer wants to allow serving the package from
    their domain; there now needs to be an option in the package.json to
    explicitly disallow the loader from loading the code from the other
    domain.
    No, I am not suggesting any new loading mechanisms; the package format
    should follow whatever format is being used by the modules. We don't
    need CORS at all.
    Then there are the failure cases you have to support in code
    that has to be included in the loader. It might mean a few custom
    exceptions to be thrown, but as I mentioned, the documentation of the
    possible problems and the specific situations where they would apply
    does not seem worth the benefit. Particularly given that the steps the
    developer needs to go through are no different:

    Developer time package processing:
    - Download loader that has the package tool.
    - Use package tool to fetch the package and configure your app:
    pkg.js add foo http://my-site.com/foo/
    Don't forget the other steps:
    * Ensuring you have the right Java runtime
    * Downloading the right Java runtime if it is not there
    * Making sure you have Java in your path variable
    * Finding the correct script for your OS to execute

    (It is even more difficult with Node, if we decide to go that route)

    The build tool has consistently been a source of extra effort for
    developers. This is OK for doing builds, where the developer has usually
    already invested in their app, so some extra time to learn the build
    tool is probably not going to stop them. But if developers have to jump
    this hurdle right from the beginning, before they can even start using
    extra packages, it is a different story and much more likely to dissuade
    a user from continuing.

    Anyway, again I am not opposed to providing a command line package tool.
    I am opposed to limiting legitimate options for how our users can use
    Dojo in the easiest way possible.
  • James Burke at Aug 24, 2010 at 2:45 pm

    On Tue, Aug 24, 2010 at 6:18 AM, Kris Zyp wrote:
    No, I am not suggesting any new loading mechanisms; the package format
    should follow whatever format is being used by the modules. We don't
    need CORS at all.
    Oh, maybe I misunderstood. I thought the idea was that you would use
    registerPackageMapping to give a URL to a package.json file. An XHR
    request is done to fetch the package.json. That package.json file could
    be on another domain (in fact, likely for CDN-hosted packages), so it
    would be a cross-site request, which would require the server to send
    the right CORS Access-Control headers to work.

    It also means that the XHR calls must be synchronous, and there can be
    nested sync XHR calls done for the nested dependencies. I really want
    to get away from sync calls in a loader. The text! plugin in RequireJS
    uses XHR, but its calls are async.

    If instead you try to work out some xdomain-friendly .js file format,
    equivalent to package.json, that can be loaded via script tags, then
    packages need to actively opt in to the system to work. It is
    another thing the user needs to do to make a full package. Given the
    server-side skew in CommonJS, not everyone will do this. Someone will
    try to load one of those packages and the loader will need to
    communicate the error: more code in the loader.

    If you do manage to convince folks to use a package.js with a require
    callback (or switch to async XHR), now the browser loader has to build
    in the smarts to delay fetching any modules until all package.js files
    have been loaded. More logic and weight in the loader.
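
    For the sake of argument, such a script-tag-loadable descriptor might
    look like this (an entirely hypothetical shape; no such format is
    specified anywhere today):

    // package.js: loadable cross-domain via a plain script tag.
    require.definePackage({
      name: "foo",
      lib: "lib",
      main: "main",
      mappings: {
        "json": "http://dojotoolkit.org/dojox/json/"
      }
    });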

    These are all small edge case failures that end up looking like a
    minefield to effectively use and support. To me, it is not worth that
    complexity cost.
    Don't forget the other steps:
    * Ensuring you have the right Java runtime
    * Downloading the right Java runtime if it is not there
    * Making sure you have Java in your path variable
    * Finding the correct script for your OS to execute

    (It is even more difficult with Node, if we decide to go that route)

    The build tool has consistently been a source of extra effort for
    developers. This is OK for doing builds, where the developer has usually
    already invested in their app, so some extra time to learn the build
    tool is probably not going to stop them. But if developers have to jump
    this hurdle right from the beginning, before they can even start using
    extra packages, it is a different story and much more likely to dissuade
    a user from continuing.
    The build tool is a step up in complexity vs. a package manager, and
    they should be separate things. I hope that the package manager will
    be able to run on Rhino or Node. Using a package manager is a useful
    tradeoff: if it means the user does not need to download a 23MB file
    to get started with Dojo, I think that is worth it. It also means they
    avoid a bunch of possible edge cases that will not work with runtime
    parsing of the package.json file. In other words, taking some
    complexity cost up front will mean a more consistent, robust
    experience with less browser-based code.
    Anyway, again I am not opposed to providing a command line package tool.
    I am opposed to limiting legitimate options for how our users can use
    Dojo in the easiest way possible.
    For simple dojo/dijit/dojox modules that do not have deep nested
    dependencies, the developer can still enter a simple lib path config
    pointing to the remote package. I am not sure how practical it is if
    there are nested dependencies, but there is a manual, HTML/script based
    way to do the configuration. I am still not sold on the complexity cost
    of allowing the package.json parsing on the fly.

    But I do not want to be a force for stop energy. If you are very
    motivated to do the work, and others see the value in having that
    feature in Dojo, then please proceed, but I do not want to be involved
    with those parts. At this time, I do not plan on supporting that path
    in RequireJS, but as always, that could change over time.

    James
  • Rawld Gill at Aug 19, 2010 at 2:06 pm
    Hi James,

    The package machinery you're working on looks very interesting and powerful!

    more below....
    -----Original Message-----
    From: James Burke
    Sent: Tuesday, August 17, 2010 11:33 AM

    So the goal in the design doc was to illustrate the configuration in RequireJS
    that could be coded manually (the "packagePaths" or "packages"
    properties),
    if the developer decides to download a package manually, but to also spec
    out a command line tool (pkg.js) that the developer could use to automate
    that tedious work of "download package, configure its path".

    There is an option in the config so that the packagePaths/packages could
    refer to a remote module on a CDN/remote host, but it does not require
    parsing the package.json on the remote host for every page load; it is a config
    that should be set up once by the developer during development time/setup.

    Maybe we can discuss this issue first, before getting to the rest of your
    response, because it seems like a fundamental design choice.
    I guess I'm being slow, but I'm having trouble understanding the issue. Are
    you looking for feedback on the format/semantics of the "packages" and
    "packagePath" objects passed to require, as discussed in the "Add package to
    project" section of your design overview?

    Other random thoughts....

    * I noticed that you mentioned that the multi-version load feature may have
    to be changed. Has your experience with this feature in requireJS changed
    any of your opinions about it?

    * Seems like there will need to be some kind of discipline imposed on
    package names (e.g., who gets the "rights" to the package named "grid").
    Otoh, nobody wants to write "require('org.dojo.grid')". This suggests that,
    at least on the browser side, it would be good to have some kind of aliasing
    feature.

    * Similarly, a general solution will need the ability to load packages into
    their own dynamic namespace. The multi-version feature is a start...have you
    evolved your ideas here?

    Best,
    Rawld
  • Kris Zyp at Aug 23, 2010 at 6:50 pm

    On 8/19/2010 12:06 PM, Rawld Gill wrote:
    Hi James,

    The package machinery you're working on looks very interesting and powerful!

    more below....
    -----Original Message-----
    From: James Burke
    Sent: Tuesday, August 17, 2010 11:33 AM

    So the goal in the design doc was to illustrate the configuration in RequireJS
    that could be coded manually (the "packagePaths" or "packages"
    properties),
    if the developer decides to download a package manually, but to also spec
    out a command line tool (pkg.js) that the developer could use to automate
    that tedious work of "download package, configure its path".

    There is an option in the config so that the packagePaths/packages could
    refer to a remote module on a CDN/remote host, but it does not require
    parsing the package.json on the remote host for every page load; it is a config
    that should be set up once by the developer during development time/setup.

    Maybe we can discuss this issue first, before getting to the rest of your
    response, because it seems like a fundamental design choice.
    I guess I'm being slow, but I'm having trouble understanding the issue. Are
    you looking for feedback on the format/semantics of the "packages" and
    "packagePath" objects passed to require, as discussed in the "Add package to
    project" section of your design overview?

    Other random thoughts....

    * I noticed that you mentioned that the multi-version load feature may have
    to be changed. Has your experience with this feature in requireJS changed
    any of your opinions about it?

    * Seems like there will need to be some kind of discipline imposed on
    package names (e.g., who gets the "rights" to the package named "grid").
    Otoh, nobody wants to write "require('org.dojo.grid')". This suggests that,
    at least on the browser side, it would be good to have some kind of aliasing
    feature.
    That's exactly the issue that package mappings [1] address: they reuse
    the web/URLs for distributed authority and aliases for brevity in
    coding. Java's reverse DNS is an example of the failure of applying
    wordy namespacing without any real authority, the worst of all worlds.

    [1] http://wiki.commonjs.org/wiki/Packages/Mappings/C
    Kris
  • James Burke at Aug 23, 2010 at 9:18 pm

    On Thu, Aug 19, 2010 at 11:06 AM, Rawld Gill wrote:
    I guess I'm being slow, but I'm having trouble understanding the issue. Are
    you looking for feedback on the format/semantics of the "packages" and
    "packagePath" objects passed to require, as discussed in the "Add package to
    project" section of your design overview?
    Just looking for general feedback. I am trying to outline how I think
    I can get something to work, and if anyone sees problems, it would be
    good to know sooner than later. Useful feedback is also "I do not plan
    on using this/do not want to use this, what about this instead".
    Other random thoughts....

    * I noticed that you mentioned that the multi-version load feature may have
    to be changed. Has your experience with this feature in requireJS changed
    any of your opinions about it?
    What I am considering is using the multiversion support in a different
    way: allowing a package/module to dynamically create a context if it
    needs a module that is at a different version than one used by another
    package/module in the system. I think allowing top-level version
    contexts to be created is still useful to have.
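
    For background, RequireJS's existing multiversion support keys loads
    off a context name, roughly like this (a sketch; the baseUrl values
    are illustrative):

    // Two versions of the same module loaded side by side:
    require({ context: "v1", baseUrl: "js/v1" }, ["foo/bar"], function (bar) {
      // bar comes from the v1 tree
    });
    require({ context: "v2", baseUrl: "js/v2" }, ["foo/bar"], function (bar) {
      // bar comes from the v2 tree
    });
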
    * Seems like there will need to be some kind of discipline imposed on
    package names (e.g., who gets the "rights" to the package named "grid").
    Otoh, nobody wants to write "require('org.dojo.grid')". This suggests that,
    at least on the browser side, it would be good to have some kind of aliasing
    feature.
    What Kris said; I believe the ability to map module names to paths per
    package helps avoid namespace ownership issues, coupled with the
    ability to create contexts per package if needed.
    * Similarly, a general solution will need the ability to load packages into
    their own dynamic namespace. The multi-version feature is a start...have you
    evolved your ideas here?
    Right, I think the context feature in RequireJS can be used to meet
    this need, with some tweaks -- allowing a context to be referenced by
    a set of modules in a package.

    However, I only want to create new contexts if there is an actual
    version conflict. One thing I do not like about what I perceive to be
    the model in CommonJS envs is for each module to get its own version
    of another module. I think there are important cases where you want a
    central dispatch module that many modules register with to communicate
    via things like publish/subscribe. In this case you want all modules
    to have the same handle on the common dispatch module. I believe this
    is necessary for any long-lived, dynamically updated apps, like those
    that are in a browser. For request-driven server side apps, it may be
    less of a concern.

    Feel free to outline something else if you see a different way to
    attack the problem.

    James
  • Kris Zyp at Aug 17, 2010 at 3:09 pm

    On 8/17/2010 12:32 PM, James Burke wrote:
    On Tue, Aug 17, 2010 at 10:27 AM, Kris Zyp wrote:

    I believe that requiring a dependency on a server side component
    (package manager) for first class development is probably a non-starter
    for Dojo, and doesn't seem like a good idea for RequireJS either. I
    believe that the current approach (both in Dojo and RequireJS) of
    allowing developers to start developing without any server side
    executable, and then allowing them to later optimize for improved
    performance, is ideal. Developers should be able to start development
    with just a web server and have it Just Work without any server side effort.
    I think maybe we have a disconnect. The pkg.js script is something the
    developer runs on their box to just automate setting up their app
    config, and to also download a package to their local project. It is
    not something that runs on a server or is used during runtime as part
    of serving the web app.
    That's fine, I agree, but just as outlined in my options, we shouldn't
    *require* a developer to use such a tool. An additional step of
    executing something before you can use Dojo or RequireJS isn't required
    now, and shouldn't be required. Offering this as a tool for getting
    started is great though.
    I expect a great number of packages will be available from web sites
    to download and include in projects, but those sites do not want to
    become CDNs, serving modules to every end user's browser running an
    app that happens to depend on their package.

    Similarly, I think it hurts runtime performance to have to fetch
    remote package.json files, parse them, then configure the paths for
    every page load in the browser.
    Absolutely. And so does fetching individual modules instead of using a
    build (probably much more so in most cases). But we should still give
    developers this option so they can get up and running as easily as
    possible.
    So the goal in the design doc was to illustrate the configuration in
    RequireJS that could be coded manually (the "packagePaths" or
    "packages" properties), if the developer decides to download a package
    manually, but to also spec out a command line tool (pkg.js) that the
    developer could use to automate that tedious work of "download
    package, configure its path".
    Right, automating this is helpful, and I am in support of providing such
    a tool, but if they want to do it manually, we should make it as easy as
    possible. Learning a packagePaths format in addition to the package
    mappings format doesn't seem like it is simplifying the user's life.
    There is an option in the config so that the packagePaths/packages
    could refer to a remote module on a CDN/remote host, but it does not
    require parsing the package.json on the remote host for every page
    load; it is a config that should be set up once by the developer
    during development time/setup.
    Once again, I don't want to force decisions on our users. If they want
    to provide that information, to avoid package.json requests, great. If
    they don't, and just want to get things up and running quickly, we
    shouldn't limit that option.
    Maybe we can discuss this issue first, before getting to the rest of
    your response, because it seems like a fundamental design choice. It
    seems like you prefer the approach where the module loader in the browser
    figures it out based on package.json contents, but it seems like that
    entails more network calls and still requires a developer to configure
    the location of the package.json file. If the developer has to do some
    configuration, might as well just configure the final package path.
    Hopefully most packages will not deviate from the "lib" and "main.js"
    norms so that the configuration will be terse/shorter than a path to a
    package.json.
    Sorry, by making browser-based package.json loading option a, I didn't
    mean to indicate a personal preference for it. I don't prefer option
    a, but the world of Dojo users is bigger than any one single preference.
    (My preference would be option f, of course :) )
    Anyway, if there were only a single final package path to be configured,
    that would be one thing, but it is the handling of all the dependent
    packages that makes the automation an important step.

    --
    Thanks,
    Kris
  • Jonathan Bond-Caron at Aug 17, 2010 at 4:08 pm

    On Tue Aug 17 02:23 AM, James Burke wrote:
    In particular, I am trying to work out how to deal with packages. This
    has special relevance when considering dojox. I think it makes sense
    to start with CommonJS packages and see how they could be loaded in
    the browser. I made an outline of what I think that means for
    RequireJS
    here:

    http://github.com/jrburke/requirejs/blob/master/docs/design/packages.md
    CommonJS is solving a server side problem, and their package format
    looks much like installing an application to an OS.

    You might want to look into:
    http://www.openajax.org/member/wiki/OpenAjax_Metadata_Specification_Widget_Metadata

    It's in XML to be IDE friendly but there's an equivalent JSON format.

    Having a tool in dojo to create packages ("pkg.js createApp appName") as
    a simple .zip would be amazing.

    When it comes to using a specific "code loading strategy" for the package &
    dependencies, that's a different topic.
  • Kris Zyp at Aug 17, 2010 at 10:40 pm

    On 8/17/2010 2:08 PM, Jonathan Bond-Caron wrote:
    On Tue Aug 17 02:23 AM, James Burke wrote:

    In particular, I am trying to work out how to deal with packages. This
    has special relevance when considering dojox. I think it makes sense
    to start with CommonJS packages and see how they could be loaded in
    the browser. I made an outline of what I think that means for
    RequireJS
    here:

    http://github.com/jrburke/requirejs/blob/master/docs/design/packages.md
    CommonJS is solving a server side problem and their package format looks
    much like installing an application to an os.

    You might want to look into:
    http://www.openajax.org/member/wiki/OpenAjax_Metadata_Specification_Widget_Metadata

    It's in XML to be IDE friendly but there's an equivalent JSON format.
    The OpenAjax Metadata specification is for API information (methods,
    properties, classes, etc.) and is not really related to packages at all.
    CommonJS packages are definitely *the* package system for JavaScript.
    Having a tool in dojo to create packages "pkg.js createApp appName" as a
    simple .zip would be amazing
    Yes, absolutely, but that is not the only way Dojo is going to be used.
    When it comes to using a specific "code loading strategy" for the package &
    dependencies, that's a different topic.
    If you're a developer who wants your code to continue to work within
    different loading strategies, you'd better hope the framework developers
    didn't just dismiss it as a different topic :).

    Kris
  • Jonathan Bond-Caron at Aug 18, 2010 at 9:04 am

    On Tue Aug 17 10:40 PM, Kris Zyp wrote:
    On 8/17/2010 2:08 PM, Jonathan Bond-Caron wrote:
    On Tue Aug 17 02:23 AM, James Burke wrote:
    The OpenAjax Metadata specification is for API information (methods,
    properties, classes, etc.) and is not really related to packages at all.
    CommonJS packages are definitely *the* package system for JavaScript.
    I can agree for JavaScript, but what about everything else that's
    available in the browser (CSS, images, Flash, etc.)?

    For example, how would the package format
    (http://wiki.commonjs.org/wiki/Packages/1.0) look for:
    http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/widget/tests/test_Toaster.html

    I'm not against CommonJS, but I would prefer to design a package format
    that deals with the browser environment and others ("v8", "ejs", "node",
    "rhino").

    Would it be better to have a single package format or 2+ (one for
    CommonJS, one for the browser, ...)? I don't know.
  • Kris Zyp at Aug 23, 2010 at 6:46 pm

    On 8/18/2010 7:04 AM, Jonathan Bond-Caron wrote:
    On Tue Aug 17 10:40 PM, Kris Zyp wrote:
    On 8/17/2010 2:08 PM, Jonathan Bond-Caron wrote:

    On Tue Aug 17 02:23 AM, James Burke wrote:
    The OpenAjax Metadata specification is for API information (methods,
    properties, classes, etc.) and is not really related to packages at all.
    CommonJS packages are definitely *the* package system for JavaScript.
    I can agree for JavaScript, but what about everything else that's
    available in the browser (CSS, images, Flash, etc.)?

    For example, how would the package format
    (http://wiki.commonjs.org/wiki/Packages/1.0) look for:
    http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/widget/tests/test_Toaster.html
    Good question. CommonJS packages give some precedent for some of the
    package layout, but there may still be some freedom for us to set some
    precedent of our own. I'll suggest a package layout for dojox/grid
    (dojox/widget is a pretty poor example, being a dumping ground for
    widgets that don't belong in dijit; some of the widgets in dojox/widget
    might even deserve their own package).
    package folder structure:
    - package.json
    - README
    + lib
      - DataGrid.js
      - *.js
    + resources
      - _Grid.html
      - Grid.css
      ...
    + test
      - test_data_grid.html
      - ...

    package.json:
    {
      "name": "dojox-grid",
      "contributors": ["Bryan Forbes", ...],
      "version": "1.6",
      "mappings": {
        "dijit": "../../dijit/"
      }
    }
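
    A client could then pull the package in with a single mapping (the
    download URL here is made up for illustration):

    require.registerMapping({
      "dojox-grid": "http://packages.dojotoolkit.org/dojox-grid/"
    });
    require.def(["dojox-grid/DataGrid"], function (DataGrid) {
      // DataGrid resolves to lib/DataGrid.js inside the package, and the
      // "dijit" mapping is picked up from its package.json.
    });
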
    I'm not against CommonJS, but I would prefer to design a package format
    that deals with the browser environment and others ("v8", "ejs", "node",
    "rhino").

    Would it be better to have a single package format or 2+ (one for
    CommonJS, one for the browser, ...)? I don't know.
    I don't see any reason to need a different package layout, but a
    different module format is needed for async module loading. This, of
    course, has been discussed extensively in the module threads.

    Kris
  • Kris Zyp at Aug 17, 2010 at 10:56 pm
    I created a ticket for the effort of adding package support to Dojo:
    http://bugs.dojotoolkit.org/ticket/11584
    I added a proposed patch (we certainly don't have to use it, it is just
    a proposal) for adding package support to the client side loader. The
    patch is purely additive, wouldn't create any backwards
    incompatibilities, and is very minimal. I think it would be great to
    have package management tools, templates, and build support in there too.

    I would also love to see this coincide with the addition of async module
    loading like we have been talking about (using RequireJS style modules
    or something similar). To reiterate, introducing this in Dojo 2.0 is far
    too late. We can't make the 0.4->0.9 mistake again. Dojo 2.0 doesn't
    exist as an opportunity to add new stuff and remove old stuff at the
    same time; anything we are hoping to replace in Dojo 2.0 needs to have
    its replacement introduced several versions ahead of 2.0. Our big
    changes like the Dojo Data renovation, async module loading, DojoX
    packaging, etc. need to be introduced and stable well ahead of the 2.0
    release, when we look at dropping old APIs. Let's get async module
    loading support in there as soon as possible.

    Thanks,
    Kris
  • Bill Keese at Aug 18, 2010 at 12:14 am

    On 8/18/10 11:56 AM, Kris Zyp wrote:
    Dojo 2.0 doesn't
    exist as an opportunity to add new stuff and remove old stuff at the
    same time; anything we are hoping to replace in Dojo 2.0 needs to have
    its replacement introduced several versions ahead of 2.0. Our big
    changes like the Dojo Data renovation, async module loading, DojoX
    packaging, etc. need to be introduced and stable well ahead of the 2.0
    release, when we look at dropping old APIs. Let's get async module
    loading support in there as soon as possible.
    I agree. Admittedly though, any change added into 1.x causes
    destabilization and migration headaches for existing users, even if it's
    theoretically backwards compatible. (We've seen this with claro,
    Stateful, etc.) It's a tradeoff.

    Last time I looked at requireJS I thought we could support it
    concurrently with the current dojo.provide()/dojo.require() API.
    Unfortunately, James said he wasn't interested in doing that.
  • Dustin Machi at Aug 18, 2010 at 12:18 am
    We can support the package format in 1.x with the standard
    dojo.provide/require without any issues or code changes. I made a branch
    last weekend to get it working and got most of it working, and in the
    next day or two I'll modify it to work with what Kris described and sent
    in as a patch. It is independent of the RequireJS stuff.

    Dustin

  • James Burke at Aug 23, 2010 at 8:40 pm
    My laptop died while I was on a trip, and I am still catching up with
    email after getting the laptop restored. Some thoughts:

    Packages support:
    I think I understand Dustin's patch, but it seems limited to
    dijit/dojox, and anyone who created a dijit/dojox-namespaced module
    that was not part of the official tree would have to convert to the
    new directory layout as part of the 1.6 upgrade. Maybe that is fine,
    but I want to mention it. What is not clear to me, and maybe Dustin can
    explain more: what does the workflow look like for a developer that
    wants to use the new package directory structure? What do they have to
    register with the Dojo loader?

    On the dojo.registerPackageMappings in ticket #11584:
    I really do not like the idea of parsing package.json files on the fly
    in the browser as part of the loader. It seems very inefficient,
    limited by xdomain concerns, and something easily solved by the
    developer running a command to get the package. It is not clear how
    on-the-fly package.json work would work in the build tool too. Is the
    build tool expected to fetch the package.json file and embed it like
    other dojo.cache references? Is the build tool going to just do the
    work that registerPackageMappings does now? Should the build tool
    download the whole package for local reference so the modules can be
    inlined with a build layer? It feels like it would have to do the same
    work that the developer would do to just pull down the package
    manually. So that whole workflow might be good to spec out: what does
    the developer do during project setup, what does the loader do, what
does the build tool do. How is dojo.moduleUrl affected, and what about
i18n modules (I assume they are unaffected if the modulePath is set correctly)?

These items are separate but were brought up on this thread; I suggest
creating new threads if they are to be pursued:

    1) Dojo has an async loader now, the xdomain loader. If the goal is to
    just have some async loader, that one can be used. If you want it to
    be the default loader, that is a bit more work, and the syntax to my
eyes is a bit verbose, but it works with today's code, and a build can
output it.

    2) I am not a fan of supporting multiple module formats because it
    makes too many permutations for the build tool and the user. It seems
    like the build tool would have to support modules written in any of
    the formats, mixing and matching, and some loading characteristics are
    different. For example, i18n modules in RequireJS can be included in a
    build layer, they are not assumed to always be on-demand loaded. The
    assurances or set up for multiversion support may be different, and
then the documentation of all of it scares me. Too much user choice
will lead to paralysis. It is a mess to support, and I have no
    desire to put work into it since I see it as effectively wasted
    effort. Better to wait for Dojo 2.0 for a module format change. End
    result, I am not volunteering for any work related to that for Dojo
    1.x. If someone else wants to, go for it, but I do not want to
    maintain it.

    James

    On Tue, Aug 17, 2010 at 9:18 PM, Dustin Machi wrote:
We can support the package format in 1.x with the standard dojo.provide/require without any issues or code changes. I made a branch last weekend and got most of it working, and in the next day or two I'll modify it to work with what Kris described and sent in as a patch. It is independent of the RequireJS stuff.

    Dustin

    On Aug 18, 2010, at 12:14 AM, Bill Keese wrote:

On 8/18/10 11:56 AM, Kris Zyp wrote:
Dojo 2.0 doesn't
exist as an opportunity to add new stuff and remove old stuff at the
same time; anything we are hoping to replace in Dojo 2.0 needs to have
its replacement introduced several versions ahead of 2.0. Our big
changes like the Dojo Data renovation, async module loading, DojoX
packaging, etc. need to be introduced and stable well ahead of the 2.0
release, when we look at dropping old APIs. Let's get async module
loading support in there as soon as possible.
I agree. Admittedly though, any change added into 1.x causes
destabilization and migration headaches for existing users, even if it's
theoretically backwards compatible. (We've seen this with claro,
Stateful, etc.) It's a tradeoff.

    Last time I looked at requireJS I thought we could support it
    concurrently with the current dojo.provide()/dojo.require() API.
    Unfortunately, James said he wasn't interested in doing that.
  • Bill Keese at Aug 24, 2010 at 12:14 am
    This is all true, although note that the idea is to deprecate the old
    format and remove it in 2.0, not to support both formats indefinitely.

    It's the same as backporting other 2.0 features into the 1.x branch:
    it's extra work for us in order to make our users happy, by making
    migration easier. (And as a side benefit it lets us hammer out the new
    features before we commit to them in 2.0)
    On 8/24/10 9:40 AM, James Burke wrote:
    2) I am not a fan of supporting multiple module formats because it
    makes too many permutations for the build tool and the user. It seems
    like the build tool would have to support modules written in any of
    the formats, mixing and matching, and some loading characteristics are
    different. For example, i18n modules in RequireJS can be included in a
    build layer, they are not assumed to always be on-demand loaded. The
    assurances or set up for multiversion support may be different, and
then the documentation of all of it scares me. Too much user choice
will lead to paralysis. It is a mess to support, and I have no
    desire to put work into it since I see it as effectively wasted
    effort. Better to wait for Dojo 2.0 for a module format change. End
    result, I am not volunteering for any work related to that for Dojo
    1.x. If someone else wants to, go for it, but I do not want to
    maintain it.

    James
  • James Burke at Aug 24, 2010 at 12:29 am

    On Mon, Aug 23, 2010 at 9:14 PM, Bill Keese wrote:
This is all true, although note that the idea is to deprecate the old
format and remove it in 2.0, not to support both formats indefinitely.
Right, that does not sound so appetizing; the work to support the
multiple module formats will not live as long.
    It's the same as backporting other 2.0 features into the 1.x branch:
    it's extra work for us in order to make our users happy, by making
migration easier. (And as a side benefit it lets us hammer out the new
    features before we commit to them in 2.0)
I agree where the feature is a module that you can optionally include.
A module format is a bit more work to support: it is not some extra
module; it can affect all modules. And the build permutations to
    support more than one are ugly. I am not personally impacted by
    needing to bridge module formats, so it is hard for me to do the work
    for something I do not need. I would rather make sure the new thing
    works well without the legacy baggage. However, I fully support
    someone else doing the work, and I will gladly step out of the way if
    they do. I am happy to support translating dojo modules into
    RequireJS-formatted modules (I have a script in RequireJS to do that),
    but I do not want to support mixed mode.

    James
  • Dustin Machi at Aug 24, 2010 at 1:53 am
    Inline.
    Packages support:
    I think I understand Dustin's patch, but it seems limited to
    dijit/dojox, and anyone who created a dijit/dojox-namespaced module
    that was not part of the official tree would have to convert to the
    new directory layout as part of the 1.6 upgrade. Maybe that is fine,
but I want to mention it. What is not clear to me, and maybe Dustin can
explain more: what does the workflow look like for a developer who
    wants to use the new package directory structure? What do they have to
    register with the Dojo loader?
This branch doesn't exist anymore, but note that the point of it was to make it optional. One of the questions that arose from that test branch was how to deal with the namespacing of the dojox modules in the context of 1.x. I've since taken the feedback from these discussions and adapted dojo and dijit (a few dojox modules soon to follow for testing there) to a version of Kris' registerPackageMapping.

It can be seen here: http://github.com/dmachi/dojo. Note that I've disabled the robot tests in Dijit for now to avoid some work for this prototype.

    Having gone through that now, I have a few comments and answers to some of your questions below:

First, registerPackageMappings is additive. It reads the package.json and simply does registerModulePath based on the information loaded. If package.json doesn't exist, it simply uses a default mapping (lib/). While the build tool itself should read the package.json, at runtime it is not required to exist. The package.json need only be consumed during development.
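
As a concrete illustration of that additive behavior (the package name and path here are made up, and this is a sketch of the semantics rather than the patch itself):

dojo.registerPackageMapping({ "foo": "../plugins/foo" });
// With no package.json present, the default mapping applies,
// which is roughly equivalent to:
dojo.registerModulePath("foo", "../plugins/foo/lib");
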
    On the dojo.registerPackageMappings in ticket #11584:
    I really do not like the idea of parsing package.json files on the fly
    in the browser as part of the loader. It seems very inefficient,
    limited by xdomain concerns, and something easily solved by the
    developer running a command to get the package.
This only applies to xdomain, which requires a build anyway. Running a command to get the package, while convenient for some simple use cases, is a pain to use with version control. I find using externals to be as likely as manipulating things with the package manager for people doing non-trivial things. I would estimate maybe 75% of the customers I've worked with that are using svn use externals and such for joining their packages together. I'm not at all opposed to having or shipping with a package manager. Personally, I think its job should be limited to retrieving packages and dependencies and placing them in a directory, as that maintains compatibility. If a transformation is required during development, it's counterproductive and not something I'd be interested in.

It is also not clear how
on-the-fly package.json handling would work in the build tool. Is the
    build tool expected to fetch the package.json file and embed it like
    other dojo.cache references?
It can be ignored, since registerPackageMappings simply sets up module paths, which the build tool can consume normally.
    Is the build tool going to just do the
    work that registerPackageMappings does now? Should the build tool
    download the whole package for local reference so the modules can be
    inlined with a build layer?
I suppose it depends on what one wants the build tool to do, but assuming you want to make a completely local build, it could easily invoke a request to retrieve the package (and subpackages) or invoke the package manager. Aside from the package retrieval, it only needs to copy from the source to the release in a flattened manner (remove the lib, resources, etc. from the package directory), and then the build works as before.
    It feels like it would have to do the same
    work that the developer would do to just pull down the package
    manually. So that whole workflow might be good to spec out: what does
    the developer do during project setup, what does the loader do, what
does the build tool do. How is dojo.moduleUrl affected, and what about
i18n modules (I assume they are unaffected if the modulePath is set correctly)?
    In this test, the dijit package.json contains a structure like:

    "directories": {
    "lib": "lib",
    "resources": "resources",
    "tests": "tests",
    "nls": "lib/nls"
    },

This does registerModulePath for:

    dijit -> ../plugins/dijit/lib
    dijit.resources -> ../plugins/dijit/resources
    dijit.tests -> ../plugins/dijit/tests
    dijit.nls -> ../plugins/dijit/lib/nls

Aside from adjusting the paths in _our_ modules, no custom code that exists today needs to change. Adjusting the paths basically means dojo.moduleUrl("dijit", "resources/foo.bar") needs to become dojo.moduleUrl("dijit.resources", "foo.bar"). This is a pretty simple change, and again something that none of the existing code needs to do. Since, in the end, the build system can run as it does now, including for x-domain builds, it will continue to work.
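
To make that translation concrete, here is a rough sketch of how a registerPackageMapping implementation could turn the "directories" entry above into registerModulePath calls; this is illustrative only, not the code from the branch:

function registerPackageMapping(name, packageUrl, packageJson){
    var dirs = (packageJson && packageJson.directories) || { lib: "lib" };
    // the package name itself maps to the lib directory...
    dojo.registerModulePath(name, packageUrl + "/" + dirs.lib);
    // ...and every other directory maps to a name.subdir module path
    for(var key in dirs){
        if(key !== "lib"){
            dojo.registerModulePath(name + "." + key, packageUrl + "/" + dirs[key]);
        }
    }
}
// registerPackageMapping("dijit", "../plugins/dijit", dijitPackageJson)
// yields the four module paths listed above.
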
These items are separate but were brought up on this thread; I suggest
creating new threads if they are to be pursued:

    1) Dojo has an async loader now, the xdomain loader. If the goal is to
    just have some async loader, that one can be used. If you want it to
    be the default loader, that is a bit more work, and the syntax to my
eyes is a bit verbose, but it works with today's code, and a build can
output it.
I thought the general consensus here was that changing the loader out completely for 1.x is too big of a task and that it would be a 2.x change. I'm fine with that; this work I'm doing is only for 1.x at the moment.
    2) I am not a fan of supporting multiple module formats because it
    makes too many permutations for the build tool and the user. It seems
    like the build tool would have to support modules written in any of
    the formats, mixing and matching, and some loading characteristics are
    different. For example, i18n modules in RequireJS can be included in a
    build layer, they are not assumed to always be on-demand loaded. The
    assurances or set up for multiversion support may be different, and
then the documentation of all of it scares me. Too much user choice
will lead to paralysis. It is a mess to support, and I have no
    desire to put work into it since I see it as effectively wasted
    effort. Better to wait for Dojo 2.0 for a module format change. End
    result, I am not volunteering for any work related to that for Dojo
    1.x. If someone else wants to, go for it, but I do not want to
    maintain it.
I think this may be referring to a comment that I've missed somewhere. My interest, at this time, is simply in separating things into packages so that this work can begin, its details can be sorted out, and problems can be identified. I do think the actual format of modules (as opposed to packages) is a bit much for the stable branch of dojo; however, I do think it is important that some of these changes get exposed in ways that people can start playing with them. If we don't allow some of this to be sorted out in released code, we'll end up either a) creating a 0.x -> 1.x migration nightmare again, b) wanting to change APIs shortly after people start using them in practice, or c) choosing such a conservative approach to avoid a and b in 2.x that we gain little benefit from the work.

    Dustin
  • James Burke at Aug 24, 2010 at 2:32 am

    On Mon, Aug 23, 2010 at 10:53 PM, Dustin Machi wrote:
First, registerPackageMappings is additive. It reads the package.json and simply does registerModulePath based on the information loaded. If package.json doesn't exist, it simply uses a default mapping (lib/). While the build tool itself should read the package.json, at runtime it is not required to exist. The package.json need only be consumed during development.
    If this means in dev we can see some 404s for some package.json files,
    I believe that is not a good direction to take. Seeing the 404s for
    i18n bundles now in dev is bad enough. I believe the errors that show
    up in the console from the 404s make the developer question the
    robustness of the toolkit, and it is yet another sharp edge to
    explain.
This only applies to xdomain, which requires a build anyway. Running a command to get the package, while convenient for some simple use cases, is a pain to use with version control. I find using externals to be as likely as manipulating things with the package manager for people doing non-trivial things. I would estimate maybe 75% of the customers I've worked with that are using svn use externals and such for joining their packages together. I'm not at all opposed to having or shipping with a package manager. Personally, I think its job should be limited to retrieving packages and dependencies and placing them in a directory, as that maintains compatibility. If a transformation is required during development, it's counterproductive and not something I'd be interested in.
    I agree about the need to have some packages already there in the
    source tree, and in that case the command line tool would just modify
    the app's top level JS module to add in the path mappings.

    I guess that is the main issue: something needs to tell the app about
    the paths. The choices seem to be:

    1) developer codes in URLs for package.json files, and loader tries to
    fetch those files on the fly and set paths

    2) developer uses command line tool to fetch package.json, and tool
    sets the paths in the code. Tool can also optionally download the
    package.

    3) developer manually configures the paths. This is not so bad if
    everything can be found inside the package's lib directory. This is
    probably hard to do for nested dependencies -- developer would need to
    trace nested dependencies manually. This seems like a step back.

4) use a package tool to download the package, then flatten it to Dojo
1.x baseUrl/modulePaths expectations. This seems hard to do if you want
to allow developers to code modules in the package directory layout and
have those in the source tree.

    So for me, it is down to #1 or #2. I favor #2. It will be much more
    efficient, and while it does mean the tool will open up and modify the
    top level JS file for a webapp, it is a more robust solution with
    fewer edge cases to explain. It does mean also that we insist more on
    having one top-level JS file per app, or perhaps allow burning in the
paths to an HTML file. But to me, that is better than on-the-fly work.
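
As a sketch of what #2 might leave behind in the top-level file (package names and paths invented for illustration):

// injected by the package tool into the app's main.js:
require = { paths: {
    "foo": "packages/foo/lib",
    "json": "packages/json/lib"
}};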

    But if you and Kris are really motivated to take the runtime
    package.json path for Dojo 1.x, feel free to take it, as long as you
    are OK documenting it all and modifying the build process accordingly.
    I have serious reservations about supporting that pattern for a 2.0
    (module format will change, so there will be migration costs), but if
    the rest of the community feels like that is a more appropriate path
    for 2.0, that is fine too. I do not make a living from using Dojo now,
    so I am probably more out of touch than the rest of the group. Right
    now, I do not see the need to support runtime package.json parsing for
    RequireJS.

    James
  • Dustin Machi at Aug 24, 2010 at 3:10 am

    If this means in dev we can see some 404s for some package.json files,
    I believe that is not a good direction to take. Seeing the 404s for
    i18n bundles now in dev is bad enough. I believe the errors that show
    up in the console from the 404s make the developer question the
    robustness of the toolkit, and it is yet another sharp edge to
    explain.
Well, whether there is in fact a 404 or just no lib/ defined, it can always be avoided simply by having a package.json when you use a package. Requiring it to exist is fine with me when using packages.

    1) developer codes in URLs for package.json files, and loader tries to
    fetch those files on the fly and set paths
This would be the case when a user does dojo.registerPackageMapping({"dijit": "../plugins/dijit"}). Simply registering it like this is what forces the package.json to be loaded.
    2) developer uses command line tool to fetch package.json, and tool
    sets the paths in the code. Tool can also optionally download the
    package.
In what code do the paths get set? It seems pretty much the same thing as package.json, no? It seems to me a build does this same process. Instead of using registerPackageMappings, the build would simply generate registerModulePaths for the build.
    3) developer manually configures the paths. This is not so bad if
    everything can be found inside the package's lib directory. This is
    probably hard to do for nested dependencies -- developer would need to
    trace nested dependencies manually. This seems like a step back.
Do you mean actually containing a dependent package within another package, or are you just referring to a normal dependency hierarchy here?
4) use a package tool to download the package, then flatten it to Dojo
1.x baseUrl/modulePaths expectations. This seems hard to do if you want
to allow developers to code modules in the package directory layout and
have those in the source tree.
That's why using the existing system is easy and convenient. During development, a package might have lib/, resources/, and tests/ directories. When the build tool runs, it simply creates /package/ and copies the contents of resources/ and lib/ into the release package folder. At that point, no more mapping is required; it behaves exactly how it does today.
    So for me, it is down to #1 or #2. I favor #2. It will be much more
    efficient, and while it does mean the tool will open up and modify the
    top level JS file for a webapp, it is a more robust solution with
    fewer edge cases to explain. It does mean also that we insist more on
    having one top-level JS file per app, or perhaps allow burning in the
paths to an HTML file. But to me, that is better than on-the-fly work.
Since people complain about requiring builds as it is, it seems to me that requiring a build to use dojo is a step backwards. #2 also seems to be appropriate only to Dojo 2.x, and does not address 1.x and the steps between the two.

Personally, my original goal was just to work out how things can be split up into a package structure shaped like CommonJS (e.g., lib/, resources/, etc.), but not actually consume the package.json. This would simply assume that all packages are structured with standard subdir names. This was meant to be a stepping stone until the 2.x setup is worked out. After having implemented that, registerPackageSettings() appeared to be easier, more transparent, and compatible with existing code. Is your recommendation to wait until 2.x to split off packages? It doesn't seem like that gives our users confidence or helps them migrate in pieces instead of a mad rush.

    Dustin
  • James Burke at Aug 24, 2010 at 3:34 pm

    On Tue, Aug 24, 2010 at 12:10 AM, Dustin Machi wrote:
    2) developer uses command line tool to fetch package.json, and tool
    sets the paths in the code. Tool can also optionally download the
    package.
In what code do the paths get set? It seems pretty much the same thing as package.json, no? It seems to me a build does this same process. Instead of using registerPackageMappings, the build would simply generate registerModulePaths for the build.
The paths get set in the main.js file for the app; more of that is
described here:
    http://github.com/jrburke/requirejs/blob/master/docs/design/packages.md

    In the Dojo 1.x world, the package tool can take an option that tells
    it what .js or .html file to use to inject the paths.

    A build would just combine modules into a smaller set of files to
    load. It would use the paths that were set in main.js by the package
    manager.
    3) developer manually configures the paths. This is not so bad if
    everything can be found inside the package's lib directory. This is
    probably hard to do for nested dependencies -- developer would need to
    trace nested dependencies manually. This seems like a step back.
    Do you mean actually containing a dependent package within another package or are you just refering to a normal dependency heirarchy here?
I mean if the user has to manually do something like set a modulePath
for each lib/ directory in each module, then they need to look at all
    the dependencies of all the top level modules to set all the paths
    manually.
4) use a package tool to download the package, then flatten it to Dojo
1.x baseUrl/modulePaths expectations. This seems hard to do if you want
to allow developers to code modules in the package directory layout and
have those in the source tree.
That's why using the existing system is easy and convenient. During development, a package might have lib/, resources/, and tests/ directories. When the build tool runs, it simply creates /package/ and copies the contents of resources/ and lib/ into the release package folder. At that point, no more mapping is required; it behaves exactly how it does today.
    So how do you deal with the case you described earlier, where a
    developer might have some packages in "source mode" via externals?
    Sounds like their content would be modified and placed in a packages
    directory. It means that step has to be run for every modification to
    that external. I thought that was something you wanted to avoid.
Since people complain about requiring builds as it is, it seems to me that requiring a build to use dojo is a step backwards. #2 also seems to be appropriate only to Dojo 2.x, and does not address 1.x and the steps between the two.
I favor option #2, which is not a build tool but a path-setting tool
with the option to download a remote package to a local directory. I
believe that is more approachable than a build. Since it modifies
    source files to inject paths, then it could be used with a Dojo 1.x
    project. Although I would rather not do the work to support Dojo 1.x.
Personally, my original goal was just to work out how things can be split up into a package structure shaped like CommonJS (e.g., lib/, resources/, etc.), but not actually consume the package.json. This would simply assume that all packages are structured with standard subdir names. This was meant to be a stepping stone until the 2.x setup is worked out. After having implemented that, registerPackageSettings() appeared to be easier, more transparent, and compatible with existing code. Is your recommendation to wait until 2.x to split off packages? It doesn't seem like that gives our users confidence or helps them migrate in pieces instead of a mad rush.
    Yes, I would not bother with packages for Dojo 1.x, but that is skewed
    by my personal experience. I do not need to support Dojo 1.x projects.
    In my mind, the module structure in a Dojo 2.x will be noticeably
    different than Dojo 1.x, so there will be migration costs. I think
    this gets into a larger discussion about how important the migration
    story is for Dojo 1.x users. I would rather work out something that
    did not consider legacy support first, then once that is worked out,
work backwards to see how the migration would go. For instance, even though the
    0.4 to 1.0 change might have been painful, I believe it was the right
    kind of change to do.

    For Dojo 1.x though, if you and Kris are really motivated and others
see value in the approach you are trying, go for it; you can own that
code.

    James
  • Rawld Gill at Aug 18, 2010 at 1:33 pm

    -----Original Message-----
    From: Bill Keese
    Sent: Tuesday, August 17, 2010 9:15 PM
    To: dojo-contributors at mail.dojotoolkit.org
    Subject: Re: [dojo-contributors] Package support
    Last time I looked at requireJS I thought we could support it
    concurrently with the current dojo.provide()/dojo.require() API.
It is easy to change the dojo code stack to support both the current loader and
an asynchronous loader. I described how to do this previously
(http://mail.dojotoolkit.org/pipermail/dojo-contributors/2010-May/022465.html)
and published the results of the experiment at
http://github.com/rcgill/dojo-1-5/commits/master.

    Coincidentally, I'll be re-attacking this project next week to finish the
    bootstrap changes, bring in the v1.5 release, and clean up a few other loose
    ends.

    As I see it, there are three issues:

    1. How are modules expressed?

    2. What loader(s) are included in dojo?

    3. Can the bootstrap be improved by making it smaller and more modular by
    decoupling the loader from the bootstrap?

So long as we agree to express modules in a form usable by an asynchronous
loader, we do not limit the code to any particular loader. The current
loader (with 20 lines of changes), RequireJS, the backdraft loader, and
others should work fine.

    I propose that we work hard to agree on [1] and do not make [2] a
    prerequisite to [1].

    Best,
    Rawld
  • Jonathan Bond-Caron at Aug 18, 2010 at 3:21 pm

    On Wed Aug 18 01:33 PM, Rawld Gill wrote:
    -----Original Message-----
    From: Bill Keese
    Sent: Tuesday, August 17, 2010 9:15 PM
    To: dojo-contributors at mail.dojotoolkit.org
    Subject: Re: [dojo-contributors] Package support
    As I see it, there are three issues:

    1. How are modules expressed?

    2. What loader(s) are included in dojo?

    3. Can the bootstrap be improved by making it smaller and more modular
    by decoupling the loader from the bootstrap?

So long as we agree to express modules in a form usable by an
asynchronous loader, we do not limit the code to any particular
loader. The current loader (with 20 lines of changes), RequireJS, the
backdraft loader, and others should work fine.

    I propose that we work hard to agree on [1] and do not make [2] a
    prerequisite to [1].
All sounds great, but instead of calling it a 'module', the term 'package'
sounds more like it's system independent.

    Would the following approach work for [2] ~

dojo._loader = new dojo.xhr.loader(); // default could be async
// (~dojo.script.loader); I have no opinion on what dojo should do by default
dojo._loaders = {};
dojo._loaders['dojo.css'] = new dojo.css.loader();
dojo._loaders['dojo.i18n'] = new dojo.i18n.loader();

dojo.require = function(id){
    if(this._loaded[id]){
        return true;
    }
    for(var ns in this._loaders){
        if(id.substr(0, ns.length) === ns){
            // 'import' is a reserved word in JS, so the loaders expose load()
            return this._loaders[ns].load(id);
        }
    }
    // default loader, chosen based on environment (rhino, etc.) or djConfig
    return dojo._loader.load(id);
};

dojo.requireNS = function(ns, id){
    return this._loaders[ns] ? this._loaders[ns].load(id) : dojo._loader.load(id);
};

    For example, CommonJS files could be loaded using ~
var ld = new CommonJS.loader();
ld.option = 'something';

dojo._loaders['commonjs'] = ld;

dojo.require('commonjs.dojo.date');
dojo.requireNS('commonjs', 'dojo.date');
  • Kris Zyp at Aug 24, 2010 at 4:07 pm

    On 8/18/2010 11:33 AM, Rawld Gill wrote:
    -----Original Message-----
    From: Bill Keese
    Sent: Tuesday, August 17, 2010 9:15 PM
    To: dojo-contributors at mail.dojotoolkit.org
    Subject: Re: [dojo-contributors] Package support
    Last time I looked at requireJS I thought we could support it
    concurrently with the current dojo.provide()/dojo.require() API.
It is easy to change the dojo code stack to support both the current loader and
an asynchronous loader. I described how to do this previously
(http://mail.dojotoolkit.org/pipermail/dojo-contributors/2010-May/022465.html)
and published the results of the experiment at
http://github.com/rcgill/dojo-1-5/commits/master.

    Coincidentally, I'll be re-attacking this project next week to finish the
    bootstrap changes, bring in the v1.5 release, and clean up a few other loose
    ends.

    As I see it, there are three issues:

    1. How are modules expressed?
I favor the RequireJS API (plus CommonJS Transport/D; they are close,
and shimming one to the other is but a few lines of code) as the
module format.
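
As a rough sketch of that shim, assuming require.def is present and simplifying Transport/D to the factory-function case:

// CommonJS Transport/D delivers modules as
//   require.define({ "some/id": function(require, exports, module){ ... } });
// which can be forwarded to the RequireJS API like so:
require.define = function(moduleSet){
    for(var id in moduleSet){
        // naming require/exports/module as dependencies gets them
        // passed into the factory function
        require.def(id, ["require", "exports", "module"], moduleSet[id]);
    }
};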

    2. What loader(s) are included in dojo?
    Dojo sync, Dojo XD, CommonJS sync, and RequireJS would be included in
    Dojo. My preference would be that Dojo sync or XD (based on build) plus
    RequireJS would be included in the default build of core for 1.x, and
    then only RequireJS in the default build of core for 2.0.
    3. Can the bootstrap be improved by making it smaller and more modular by
    decoupling the loader from the bootstrap?
    +1 to that :).

Do you have code ready to check into a branch (or something) for us to
start testing/prototyping with and compare with a RequireJS inclusion?
    Thanks,
    Kris
  • Bill Keese at Aug 17, 2010 at 10:36 pm
I understand the part about loading CommonJS packages in the
browser. But I'm confused: what does this have to do with dojox?

    At first I thought you were talking about some kind of zip file format,
    like maybe wrapping up a widget's JS, HTML, CSS, and images into a
    single download, but we already do most of that with the build tool
    (except for the images, where we use sprites).
    On 8/17/10 3:23 PM, James Burke wrote:
    This is in relation to Dojo 2.0 discussions. While it concerns an
    implementation in RequireJS, I do not assume that RequireJS will be
    the module loader for Dojo 2.0. I hope it will though, and this
    discussion is to make sure what RequireJS might implement would fit
    with Dojo's needs.

    In particular, I am trying to work out how to deal with packages. This
    has special relevance when considering dojox. I think it makes sense
    to start with CommonJS packages and see how they could be loaded in
    the browser. I made an outline of what I think that means for
    RequireJS here:

    http://github.com/jrburke/requirejs/blob/master/docs/design/packages.md

    Please feel free to comment on this thread, or in the RequireJS group's thread:
    http://groups.google.com/group/requirejs/browse_thread/thread/e724016dc8eb8860

    James
  • Kris Zyp at Aug 17, 2010 at 11:06 pm

    On 8/17/2010 8:36 PM, Bill Keese wrote:
    I understand the part about loading CommonJS packages in the
browser. But I'm confused: what does this have to do with dojox?
    I believe one of our goals is to break DojoX up into separately managed
    projects/packages. Each of these projects can be packaged as a CommonJS
    package and with adequate package support, they can all be used from
    Dojo applications effortlessly, even though they are packaged and maybe
even distributed separately. My proposal is that moving forward
should be done in a way that users don't have to take on a different
namespace/URL design. While each DojoX package should be downloadable in
CommonJS package format (a package.json in the root, js files in a lib
directory, etc.), the net result of installing each package (with
the recommended alias/name) would be identical to the current DojoX
directory structure. This provides a seamless, consistent way forward
    into the world of CommonJS-based JavaScript packages.
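
For example, a hypothetical dojox.gfx package in that format might look like this on disk, while still installing to the usual dojox/gfx location (the layout is illustrative):

gfx/
    package.json   (name, version, dependencies, directories)
    lib/           (the modules themselves, e.g. lib/matrix.js)
    resources/     (CSS, images, templates)
    tests/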

    Thanks,
    Kris
  • James Burke at Aug 23, 2010 at 9:07 pm

    On Tue, Aug 17, 2010 at 7:36 PM, Bill Keese wrote:
I understand the part about loading CommonJS packages in the
browser. But I'm confused: what does this have to do with dojox?
    For a Dojo 2.0 I would expect dojox modules to be packaged as packages
    that conform to the CommonJS package format.
    At first I thought you were talking about some kind of zip file format,
    like maybe wrapping up a widget's JS, HTML, CSS, and images into a
    single download, but we already do most of that with the build tool
    (except for the images, where we use sprites).
This is more about packaging a set of modules that can be pulled into
a project when a developer needs them. So, only if the user needed
dojox.gfx would they use a tool to retrieve the zip of those files
for inclusion in their app. How the zip file is retrieved, and how, when,
and where it is unpacked, are the mechanics I am trying to sort out.

    James
  • Eugene Lazutkin at Aug 24, 2010 at 1:13 am
    I've read the whole thread up to now, and anchor my post at the top for
    convenience.

I have the following sketch/developer story in my mind:

    =======================

    1) I assume that third-party and our packages are in CommonJS format ---
    James and Kris both were talking about that, so I assume it as a
    starting point.

    2) Any kind of JS development nowadays (since Firefox killed development
    directly from the file system) requires two things: a directory, and its
mapping to your local web server space. The former is to know where
to develop JS/HTML/CSS code, and the latter is to see the effects in the
browser. As soon as we have that, we have a place to put our code in.

    The package manager should know about that, or expose enough info to
    implement it. This is the place where it keeps its cache of packages.

    3) If we are to develop code in CommonJS format, so it can be consumed
by non-browser environments, we cannot run a transformation tool after
every minute change --- it gets old fairly quickly.

    There are two solutions for that: the server-side preprocessor, which
    kind of hides the transformation, or a loader, which takes in CommonJS
    modules/packages raw. The latter by necessity will be XHR-based (no
    xdomain), and synchronous --- should be fine for development, but a
    no-no for actual deployment. This situation is somewhat similar to what
    we have today in Dojo.

4) Looking back at package managers for Ruby and Python, I think it is
hard to overestimate the value of such tools. They help to sort out the
vagaries of actual packaging and installation, which can include running
custom scripts, but most importantly they help with two things ---
    satisfying dependencies, and eventual updates to newer versions.

In our case it can "optimize" packages for our environment; specifically,
it can pre-process them to add a proper harness so they can be loaded
xdomain and asynchronously. I think that at this point (in the local
    cache) they still should be unminified/uninlined, so we can debug freely.

5) The package manager works with a directory defined in #2. At the very
least it can get/set the directory and its mapping. Let's split the
directory into two areas (assuming the directory is the root '/' for
    simplicity and brevity of examples): /commonjs for packages in raw
    CommonJS format, and /packages for optimized packages.

If a programmer wants to develop in CommonJS format, she can create a
    package right in /commonjs. All manually downloaded and unpacked
    packages go in /commonjs too (or a symlink goes there).

    The package manager maintains its cache of modules in /packages. It
should be smart enough to look in /commonjs first, when a user wants to
    install a package. (I am skipping the details about global repositories,
    their index, and so on).

The list of all packages pre-processed by the package manager is kept
in /packages/index.js (feel free to bikeshed the name), which is a
regular RequireJS (or similar) module that can be loaded asynchronously.
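
A hypothetical /packages/index.js could be as simple as a module returning the mapping (the name and shape here are mine, just to make the idea concrete):

require.def("index", [], function(){
    // every package the package manager has pre-processed, mapped to
    // the directory where its optimized modules live
    return {
        "foo": "packages/foo",
        "bar": "packages/bar"
    };
});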

6) If we are talking about the RequireJS loader and CommonJS modules, then
loading a CommonJS package would require an XHR-based *synchronous*
plugin, which should be used only for development. Let's call it the
    development loader.

    The development loader starts by loading /packages/index.js. All
    packages from the list are loaded regularly from /packages. The rest is
    assumed to be in the CommonJS format => they should be loaded
    synchronously from /commonjs.
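
A minimal sketch of such a development loader (synchronous XHR plus eval, same-domain only; it ignores relative id resolution and caching, which a real one would need):

function loadCommonJSModule(id){
    // map "foo/bar" to "commonjs/foo/bar.js" and fetch it synchronously
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "commonjs/" + id + ".js", false);
    xhr.send(null);
    var module = { exports: {} };
    // evaluate the raw CommonJS source with its expected free variables
    new Function("require", "exports", "module", xhr.responseText)(
        loadCommonJSModule, module.exports, module);
    return module.exports;
}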

I want to emphasize that synchronous stuff is used only for CommonJS
packages; everything else is loaded like RequireJS does it now --
    asynchronously. If you don't use CommonJS modules (not counting the
    converted ones in /packages) *you don't pay for it* save for
    /packages/index.js loading. Don't want even that? Use the production
    loader immediately.

    I assume that supporting the third option (server-side handlers, like
Kris' Transporter) is trivial, so if a user sets it up, it can be used
    auto-magically. Feel free to detail it.

    7) The production mode is done separately. We need a build tool (like we
    have now) which can produce a directory suitable for deployment --- all
    files minified, HTML is inlined, CSS is processed, layers are assembled,
    and so on. For externally accessed packages it should produce a mapping,
    which will map package names to proper URLs. This mapping should be used
    by the production loader. Obviously no synchronous XHR-based stuff is
    allowed at this point.

    8) I want to say explicitly that the sketch above requires two modes of
    loading: the development one (sync, no xdomain assumed) and the
    production one (async, xdomain-ready). Both are trivial to do. The build
process can bundle a production loader with it, skipping the development
one to save space.

    =======================

    I think it covers most of what James and Kris were talking about. It
allows manual tinkering by a developer. It doesn't force her to use any
tools, but using them will help the development process. She can
    download and install whatever she likes, yet it provides for all
    niceties we are accustomed to have during the development, and
    deployment. All of that while a developer works with CommonJS modules,
    even developing them.

Obviously, if we are to use a development format incompatible with
other environments, sacrificing CommonJS development, the
picture above can be tweaked a lot.

    Thoughts? Comments?

    Eugene Lazutkin
    http://lazutkin.com/

    On 08/17/2010 01:23 AM, James Burke wrote:
    This is in relation to Dojo 2.0 discussions. While it concerns an
    implementation in RequireJS, I do not assume that RequireJS will be
    the module loader for Dojo 2.0. I hope it will though, and this
    discussion is to make sure what RequireJS might implement would fit
    with Dojo's needs.

    In particular, I am trying to work out how to deal with packages. This
    has special relevance when considering dojox. I think it makes sense
    to start with CommonJS packages and see how they could be loaded in
    the browser. I made an outline of what I think that means for
    RequireJS here:

    http://github.com/jrburke/requirejs/blob/master/docs/design/packages.md

    Please feel free to comment on this thread, or in the RequireJS group's thread:
    http://groups.google.com/group/requirejs/browse_thread/thread/e724016dc8eb8860

    James
  • James Burke at Aug 24, 2010 at 1:57 am

    On Mon, Aug 23, 2010 at 10:13 PM, Eugene Lazutkin wrote:
    1) I assume that third-party and our packages are in CommonJS format ---
    James and Kris both were talking about that, so I assume it as a
    starting point.
    I was actually *not* going to allow authoring in CommonJS
    format/loading those files directly in the browser loader: I was
    thinking the package manager could convert CommonJS modules into
    RequireJS format, but I do not want to support loading CommonJS
    modules directly. One of the main points of RequireJS is to be able to
    use a module format that works best natively in the browser without
    the XHR/eval tricks, and I do not want to have two loader paths. It is
    probably possible to provide a converter tool that converts RequireJS
    modules to CommonJS modules, as long as the CommonJS env supports
    setting the module export value.
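
For reference, here is the same trivial module in both formats, written out by me to show what such a converter would bridge (assuming the CommonJS side may reassign module.exports):

// RequireJS format -- the exported value can be the function itself:
require.def("bar", ["baz"], function(baz){
    return function(){ return baz(); };
});

// the equivalent CommonJS module:
var baz = require("baz");
module.exports = function(){ return baz(); };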

Otherwise, in what I wrote up, my .packages directory is like your
/packages, although all my paths are relative to the web app directory.
    When you say /packages do you mean relative to the web app's directory
    or relative to the host name of the web server?

Instead of having a packages/index.js, I would modify the app.js for
the web app, the top-level file I assumed would be loaded by the
webapp, which I assumed would be a sibling to the packages or
    .packages directory. In this way, we can avoid one extra HTTP call
    (instead of having app.js require packages/index.js).

    For what it is worth, I believe RequireJS works from local disk,
    except if you want to use the text! plugin, which uses an XHR call.

    James
  • Eugene Lazutkin at Aug 24, 2010 at 2:44 pm
    Inline.
    On 8/24/10 12:57 AM, James Burke wrote:
    On Mon, Aug 23, 2010 at 10:13 PM, Eugene Lazutkin wrote:
    1) I assume that third-party and our packages are in CommonJS format ---
    James and Kris both were talking about that, so I assume it as a
    starting point.
    I was actually *not* going to allow authoring in CommonJS
Hmm. Why? Censorship? ;-) Just by saying that, you are cutting off users
who want to develop traditional CommonJS modules with Dojo. Is it really
worth it?

I suspect that if/when CommonJS goes to the masses, the said masses would
prefer a browser environment to debug modules --- mostly because of
Firebug and other debugging tools already available and relatively
mature. By cutting them off we serve them to our "competitors". Even now
there are several tools of different quality which load regular
CommonJS modules into the browser.
    format/loading those files directly in the browser loader: I was
    thinking the package manager could convert CommonJS modules into
    RequireJS format, but I do not want to support loading CommonJS
    modules directly. One of the main points of RequireJS is to be able to
    use a module format that works best natively in the browser without
    the XHR/eval tricks, and I do not want to have two loader paths. It is
    probably possible to provide a converter tool that converts RequireJS
    modules to CommonJS modules, as long as the CommonJS env supports
    setting the module export value.

Otherwise, in what I wrote up, my .packages directory is like your
/packages, although all my paths are relative to the web app directory.
    When you say /packages do you mean relative to the web app's directory
    or relative to the host name of the web server?
    I have to stop writing long posts --- nobody reads past the first
    paragraph anyway. :-) Let me repeat once more: "assuming the directory
    is the root '/' for simplicity and brevity of examples". So, no, I do
    not assume any absolute paths, all paths in my examples are relative. I
    just used '/' to type less.
    Instead of having a packages/index.js, I would modify the app.js for
the web app, the top-level file I assumed would be loaded by the
webapp, which I assumed would be a sibling to the packages or
    .packages directory. In this way, we can avoid one extra HTTP call
    (instead of having app.js require packages/index.js).
    Somehow I missed app.js --- what is it? Could you give me a link so I
    can read up on it? Both links in your original email do not mention it,
    and I couldn't find any mention of it in this thread.
    For what it is worth, I believe RequireJS works from local disk,
    except if you want to use the text! plugin, which uses an XHR call.
    Hmm, it would be nice to verify.
    James
    Cheers,

    Eugene
  • James Burke at Aug 24, 2010 at 3:06 pm

    On Tue, Aug 24, 2010 at 11:44 AM, Eugene Lazutkin wrote:
Hmm. Why? Censorship? ;-) Just by saying that, you are cutting off users
who want to develop traditional CommonJS modules with Dojo. Is it really
worth it?

I suspect that if/when CommonJS goes to the masses, the said masses would
prefer a browser environment to debug modules --- mostly because of
Firebug and other debugging tools already available and relatively
mature. By cutting them off we serve them to our "competitors". Even now
there are several tools of different quality which load regular
CommonJS modules into the browser.
Right, and all of those loaders are inferior. XHR+eval loading has
extra baggage, and frankly I believe the CommonJS module format is optimized
for the wrong things. The typing differences between the formats are
not that great, particularly when considering the time spent trying to
debug evaled code in the browser. Sure, not every developer needs to
do work in the browser, but many do, and having a consistent format
that works well across environments is more important in the end. And
the amount of typing is really not that much different. And for crying
out loud, JavaScript started in the browser. Damn it. OK, flame off.

In short, I am doing this package work because I believe the RequireJS
syntax is better. I can reuse CommonJS modules, but those modules are
not as flexible or JavaScripty as RequireJS ones (setting the exported
value to a function is standard in RequireJS). By being able to reuse
and transform CommonJS modules for a system that works well in the
browser, I am making a bet that it will become the dominant syntax for
modules.

    I appreciate though that it may ultimately end in failure, which is
    fine. I do not think it will be utter failure, it will just mean that
    CommonJS modules will always be something different, but the ability
    to transform CommonJS modules means that RequireJS-based projects can
    still thrive on their own. If CommonJS gets a standard way to set
    exports, there can be a tool to convert back to CommonJS format too.

    I can fully appreciate if the Dojo community does not want to come
    along for that ride. So feel free to disagree, so we can work out what
    is best for the Dojo folks.
    Somehow I missed app.js --- what is it? Could you give me a link so I
    can read up on it? Both links in your original email do not mention it,
    and I couldn't find any mention of it in this thread.
    I keep thinking app.js, but I used main.js in the package writeup here:
    http://github.com/jrburke/requirejs/blob/master/docs/design/packages.md

    So please s/app.js/main.js/, sorry for mixing it up.

    James
  • Tom Trenka at Aug 24, 2010 at 3:58 pm
    "XHR+eval loading has
    extra baggage, and frankly I believe CommonJS module format optimized
    for the wrong things. The typing differences between the formats are
    not that great, particularly when considering the spent trying to
    debug evaled code in the browser. Sure, not every developer needs to
    do work in the browser, but many do, and having a consistent format
    that works well across environments in the end is more important."

I would submit (having a lot of "old" experience on this one) that the needs
of a server dev are not the same as the needs of a client dev. Trying to meld the
    two is already leading to trouble (witness the traffic on these threads),
    and maybe...just maybe...having a single loader system for both is not such
    a good idea.

    It sounds to me like James is advocating a Linux-like package system (i.e.
    one shot, aimed at a developer with the tools needed to actually grab said
    package), and everyone else is remembering that (before loaders like Dojo,
    Closure, and LAB) you just needed to ask for the specific resource. I can
    understand (and agree with) most of the stories told here, but the reality
    is that the higher the barrier to entry, the less people will work with the
    system in question, unless forced to.

    I personally don't want to be associated with "being forced to", so I'd have
    to agree with Kris on this one, but I'd suggest that maybe the whole
    package.json is not the answer either; elements of it might be. Lots of
    good arguments on all sides, and I'm enjoying the debate =) In the end
    though, barrier-to-entry is key, and *any* tools that we require people to
    try to download and use are a barrier to entry, especially if the devs in
    question feel that they are forced to use said tools.

    Dumb question: has anyone ever considered a server-side solution (not
    necessarily Rhino-based) that might collect a set of JS files from a
    development app (aka HTML file or whatever) and attempt to assemble a build
    based on that? I threw that at Alex a long time ago, but it got lost in the
    noise.

    Regards--
    Tom
    On Tue, Aug 24, 2010 at 2:06 PM, James Burke wrote:
    On Tue, Aug 24, 2010 at 11:44 AM, Eugene Lazutkin wrote:
Hmm. Why? Censorship? ;-) Just by saying that, you are cutting off users
who want to develop traditional CommonJS modules with Dojo. Is it really
worth it?

I suspect that if/when CommonJS goes to the masses, the said masses would
prefer a browser environment to debug modules --- mostly because of
Firebug and other debugging tools already available and relatively
mature. By cutting them off we serve them to our "competitors". Even now
there are several tools of different quality which load regular
CommonJS modules into the browser.
Right, and all of those loaders are inferior. XHR+eval loading has
extra baggage, and frankly I believe the CommonJS module format is optimized
for the wrong things. The typing differences between the formats are
not that great, particularly when considering the time spent trying to
debug evaled code in the browser. Sure, not every developer needs to
do work in the browser, but many do, and having a consistent format
that works well across environments is more important in the end. And
the amount of typing is really not that much different. And for crying
out loud, JavaScript started in the browser. Damn it. OK, flame off.

In short, I am doing this package work because I believe the RequireJS
syntax is better. I can reuse CommonJS modules, but those modules are
not as flexible or JavaScripty as RequireJS ones (setting the exported
value to a function is standard in RequireJS). By being able to reuse
and transform CommonJS modules for a system that works well in the
browser, I am making a bet that it will become the dominant syntax for
modules.

    I appreciate though that it may ultimately end in failure, which is
    fine. I do not think it will be utter failure, it will just mean that
    CommonJS modules will always be something different, but the ability
    to transform CommonJS modules means that RequireJS-based projects can
    still thrive on their own. If CommonJS gets a standard way to set
    exports, there can be a tool to convert back to CommonJS format too.

    I can fully appreciate if the Dojo community does not want to come
    along for that ride. So feel free to disagree, so we can work out what
    is best for the Dojo folks.
    Somehow I missed app.js --- what is it? Could you give me a link so I
    can read up on it? Both links in your original email do not mention it,
    and I couldn't find any mention of it in this thread.
    I keep thinking app.js, but I used main.js in the package writeup here:
    http://github.com/jrburke/requirejs/blob/master/docs/design/packages.md

    So please s/app.js/main.js/, sorry for mixing it up.

    James
  • Eugene Lazutkin at Aug 24, 2010 at 9:48 pm
Allow me to illustrate James' vision of Dojo 2.0.

Imagine that there is a developer, let's call him Tom, who decides to
produce a package which can be used in browsers and on other
platforms. An example of such a package can be something like
dojox.collections, dojox.encoding, dojox.math, dojox.string, or any
other generic things: json, xml, date, color utilities, or some generic
language-specific stuff. So naturally Tom wants to cover a lot with
less. And he thinks: "Wait a second! I can develop a CommonJS package,
so it can be used by server-side folks immediately, *and* I can import
it as a RequireJS module later, to use in RequireJS-based frameworks,
including Dojo 2.0!" The decision is made and Tom faces a simple task
--- developing the package in question. Amazingly, he cannot do it in the
browser with Dojo 2.0, and he is forced to use any other CommonJS
platform no matter how terrible it is. But he knows that if he needs to
change anything, he works with one format: CommonJS. The rest is
automatic due to the RequireJS importer (part of the package manager).

    Alternatively he develops everything with Dojo 2.0 and back-ports it to
    CommonJS manually, which appears to be relatively simple. Now Tom
    supports two versions. Every time he makes a fix/enhancement in the
master package, he has to back-port the change. If he finds a
platform-specific error and is forced to modify the CommonJS copy first,
he'll have to propagate it forward too.

    See Tom run. To some other platform. From Dojo 2.0. Run, Tom, run!

In this scenario I see Dojo as an exclusively browser-based toolkit. I
    don't say it is bad, just that it contradicts our stated directions to
    establish Dojo as a multi-platform JavaScript toolkit. When I asked if
    we support anything outside of a browser in the Dojo 2.0 thread, we got
    3 votes for Yes (Adam, James, and I), 0 votes for No, and 0 for Maybe.
    It concerns me a lot.

    Cheers,

    Eugene Lazutkin
    http://lazutkin.com/

    PS: If somebody feels that my CommonJS story is incorrect, please tell
    me where I am wrong --- I hope it is all a misunderstanding, or I am
    missing something.

    On 08/24/2010 02:58 PM, Tom Trenka wrote:
    "XHR+eval loading has
    extra baggage, and frankly I believe CommonJS module format optimized
    for the wrong things. The typing differences between the formats are
    not that great, particularly when considering the spent trying to
    debug evaled code in the browser. Sure, not every developer needs to
    do work in the browser, but many do, and having a consistent format
    that works well across environments in the end is more important."

    I would submit (having a lot of "old" experience on this one) that the
    needs of a server dev are not the same as the needs of a client dev.
    Trying to meld the two is already leading to trouble (witness the traffic
    on these threads), and maybe...just maybe...having a single loader system
    for both is not such a good idea.

    It sounds to me like James is advocating a Linux-like package system
    (i.e. one shot, aimed at a developer with the tools needed to actually
    grab said package), and everyone else is remembering that (before
    loaders like Dojo, Closure, and LAB) you just needed to ask for the
    specific resource. I can understand (and agree with) most of the
    stories told here, but the reality is that the higher the barrier to
    entry, the fewer people will work with the system in question, unless
    forced to.

    I personally don't want to be associated with "being forced to", so I'd
    have to agree with Kris on this one, but I'd suggest that maybe the
    whole package.json is not the answer either; elements of it might be.
    Lots of good arguments on all sides, and I'm enjoying the debate =) In
    the end though, barrier-to-entry is key, and *any* tools that we require
    people to try to download and use are a barrier to entry, especially if
    the devs in question feel that they are forced to use said tools.

    Dumb question: has anyone ever considered a server-side solution (not
    necessarily Rhino-based) that might collect a set of JS files from a
    development app (aka HTML file or whatever) and attempt to assemble a
    build based on that? I threw that at Alex a long time ago, but it got
    lost in the noise.

    Regards--
    Tom

    On Tue, Aug 24, 2010 at 2:06 PM, James Burke <jburke at dojotoolkit.org>
    wrote:

    On Tue, Aug 24, 2010 at 11:44 AM, Eugene Lazutkin
    <eugene at lazutkin.com> wrote:
    Hmm. Why? Censorship? ;-) Just by saying that, you are cutting off users
    who want to develop traditional CommonJS modules with Dojo. Is it really
    worth it?

    I suspect that if/when CommonJS goes to the masses, said masses would
    prefer a browser environment for debugging modules --- mostly because of
    Firebug and other debugging tools that are already available and
    relatively mature. By cutting them off we serve them to our
    "competitors". Even now there are several tools of varying quality that
    load regular CommonJS modules into the browser.
    Right, and all of those loaders are inferior. XHR+eval loading has
    extra baggage, and frankly I believe the CommonJS module format is
    optimized for the wrong things. The typing differences between the
    formats are not that great, particularly when considering the time spent
    trying to debug evaled code in the browser. Sure, not every developer
    needs to do work in the browser, but many do, and having a consistent
    format that works well across environments is more important in the end.
    And the amount of typing is really not that much different. And for
    crying out loud, JavaScript started in the browser. Damn it. OK, flame
    off.

    In short, I am doing this package work because I believe the RequireJS
    syntax is better. I can reuse CommonJS modules, but those modules are
    not as flexible or JavaScripty as RequireJS ones (setting the exported
    value to a function is standard in RequireJS). By being able to reuse
    and transform CommonJS modules for a system that works well in the
    browser, I am making a bet that it will become the dominant syntax for
    modules.

    I appreciate though that it may ultimately end in failure, which is
    fine. I do not think it will be an utter failure; it will just mean that
    CommonJS modules will always be something different, but the ability
    to transform CommonJS modules means that RequireJS-based projects can
    still thrive on their own. If CommonJS gets a standard way to set
    exports, there can be a tool to convert back to CommonJS format too.

  • James Burke at Aug 24, 2010 at 11:54 pm

    On Tue, Aug 24, 2010 at 6:48 PM, Eugene Lazutkin wrote:
    Allow me to illustrate James' vision of Dojo 2.0.
    Of course I see it differently. Developer Tom will want to work in the
    environment that works best for him. He does work in the browser
    and perhaps some on the server. He just wants something that works
    well in both places. He uses the RequireJS package tool to fetch the
    packages/modules he needs, and since RequireJS works well in the
    browser and in Node/Rhino, he can work in both environments easily. He
    decides to make some reusable components, and will do so using the
    RequireJS format, since that is what he uses for his applications. He
    shares those with others via publication in a package directory that
    RequireJS uses.

    In other words, I am betting that there are more browser-oriented
    JavaScript users that will slide into packages/modules via RequireJS
    than server developers who might happen across CommonJS.

    For Dojo 2.0, if the module loader is not part of the project (and I
    do not think it should be), then the Dojo modules have the flexibility
    to choose the module format they like best, or try to support more
    than one. I suggest just using RequireJS since it works well on the
    browser and server :). It does raise the question about what is
    considered "dojo" in that world. I expect it will be a set of
    packages, maybe one package for what is "base" now, plus packages for
    other "core" pieces. "dijit" could be one or a couple packages.
    "dojox" as a top level identifier could go away, and just have
    specifically named modules like "gfx", and indicate CLA-approved code
    via the package directory/package.json files. Someone who has more at
    stake in dojox is probably better positioned to answer that than me.

    I hope that Dojo 2.0 suggests using the RequireJS package manager to
    get the Dojo modules. I also prefer code samples that use the
    RequireJS require since it will work great in the browser and on the
    server and you can reliably export functions as module values. More
    topics for discussion in a Dojo 2.0 world.

    James
  • Eugene Lazutkin at Aug 25, 2010 at 3:55 am
    Basically it is a bet, AKA wishful thinking, that people will abandon the
    status quo, which is: CommonJS is firmly entrenched on server-side
    platforms. Don't get me wrong: I would love to see it happening.

    And I truly love that you have a positive user story. You should put it
    in a doc somewhere, and separately explain how to use RequireJS with
    different non-browser engines, how it co-exists with whatever loader is
    bundled, and how you can use it to load just the one module you need,
    using the default loader for the rest.

    While I think you are correct in your assumption that the majority of
    serious JS developers will come from the browser side, I am not
    convinced that they will choose some third-party tool to load stuff
    when their platform already has a loader. And even if they come from the
    browser side, it doesn't mean they will come from Dojo, meaning they may
    be:

    a) Unfamiliar with loaders, because their experience (e.g., with jQuery)
    didn't include one.

    b) Familiar with the loader of their favorite framework, e.g., YUI, and
    they will naturally want to use something similar in new environments.

    Are there any indications that the choice will be favorable?

    Speaking of YUI: when I attended TXJS (BTW, a great conference, with
    people from all around the US plus a few from Europe) I went to hear Tom
    Hughes-Croucher, who is a Yahoo! evangelist. The talk was about SSJS.
    Look at what kind of talks he gives:
    http://speakerrate.com/speakers/1487-tom-hughes-croucher --- a small
    selection:

    * Say Hello to Node.JS
    * JavaScript Everywhere! Creating a 100% JavaScript web stack
    * Running YUI3 on Node.js
    * End to end JavaScript: From Server to Client

    If you can run YUI3 on Node.js now (see above), expect more from them,
    including a loader too --- sooner or later they will need one.

    Speaking of Node.js: do you know that it has more than one package
    manager? Kiwi is the other one. From
    http://www.synchrosinteractive.com/blog/1-software/35-sharing-open-source-nodejs-libraries-with-node-package-manager-npm:

    "So, what's wrong with kiwi? Not that much really, except that it's not
    the "popular" one and, with package managers, that is crucial. I asked
    around in the #Node.js IRC channel and was informed that Node Package
    Manager (npm) is the latest and greatest and has the most momentum and
    community support. At least for the foreseeable future, npm is what you
    want to use."

    What is the strategy to move in on npm's turf? If you build it, they will
    come? Seriously, I would love to hear about it.

    Cheers,

    Eugene Lazutkin
    Dojo Toolkit, Committer
    http://lazutkin.com/
  • Kris Zyp at Aug 25, 2010 at 9:17 am
    For the developer that wants to reuse modules across the client and
    server using a single familiar format, we already have a great server-
    side story (within the Dojo Foundation, as a sub-project of Persevere).
    Transporter [1] will automatically wrap CommonJS modules in the CommonJS
    transport format for use in the browser when modules are requested. This
    gives developers the ability to write modules in the de facto standard
    format, plain CommonJS, while still being able to load them (and their
    deps) asynchronously in the browser, using any CommonJS-transport-
    compatible loader (Yabble, RequireJS, and Transporter comes with one).
    And we don't have to pin our hopes on trying to get the whole SSJS world
    to switch to using RequireJS (it doesn't seem likely that Node users are
    going to make a mass move to RequireJS modules). Now all we need to
    complete the story is Dojo support for the CommonJS transport format (or
    the RequireJS API, which is essentially a superset of the CommonJS
    transport format), and Dojo can participate in the fun.
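
    To make the wrapping concrete, here is a sketch of the idea --- the
    exact wrapper emitted varies between transport proposals, and the
    module here is invented:

        // What the developer writes: plain CommonJS (math.js).
        exports.add = function (a, b) { return a + b; };

        // What the server hands the browser: the same source wrapped so it
        // can be inserted via a script tag and run once its dependencies
        // (none here) are ready.
        require.define({
            "math": function (require, exports, module) {
                exports.add = function (a, b) { return a + b; };
            }
        }, []);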

    You probably would not want to use a package manager to handle the task
    of wrapping CommonJS modules, at least in development; running a command
    line tool every time you change a file would be terribly inefficient.
    The package manager would not be all that different from Transporter
    anyway, just a command line interface to the same basic code.

    Also, some other options and tools: on the server side, Nodules [2]
    (another Persevere sub-project) supports the CommonJS transport format
    *and* plain CommonJS modules, so if a user wants to write shared modules
    in callback style (so no transformation is required) and server-only
    components in plain CommonJS, they can do so using that server-side
    loader. On the client side, Yabble [3] supports the CommonJS transport
    format (like RequireJS), but also supports direct loading of CommonJS
    modules (without any wrapping). The direct loading of CommonJS modules
    requires sync loading, which I know we want to move away from, but it is
    debatable whether we want to retain some support for it for dev and
    compatibility purposes. I don't know if Yabble is a legitimate option (an
    alternative to RequireJS) for inclusion in Dojo or not.
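
    For reference, direct loading of an unwrapped CommonJS module boils
    down to something like this simplified, hypothetical helper (real
    loaders add caching, relative ID resolution, and error handling):

        function requireSync(id) {
            var xhr = new XMLHttpRequest();
            xhr.open("GET", id + ".js", false); // false = synchronous
            xhr.send(null);
            var module = { id: id, exports: {} };
            // Evaluate the module source with require/exports/module in
            // scope; this eval step is what makes browser debugging painful.
            new Function("require", "exports", "module", xhr.responseText)(
                requireSync, module.exports, module);
            return module.exports;
        }

        var math = requireSync("math"); // fetches and evaluates math.js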

    [1] http://github.com/kriszyp/transporter
    [2] http://github.com/kriszyp/nodules
    [3] http://github.com/jbrantly/yabble

    Kris
  • Eugene Lazutkin at Aug 25, 2010 at 2:05 pm
    The server-side story is important too. What kinds of server
    environments/frameworks are supported now, and which are planned to be
    supported in the foreseeable future? For example, I had a good experience
    with AppEngineJS (Rhino, Ringo, Nitro, AppEngine) --- can I use the
    mentioned solutions on this platform?

    In general it would be really nice to cover some common use cases, and
    have a special section covering Dojo on different platforms --- what is
    supported, what tools are available, best practices with such solutions,
    and so on.

    Cheers,

    Eugene Lazutkin
    Dojo Toolkit, Committer
    http://lazutkin.com/
  • James Burke at Aug 25, 2010 at 1:51 pm

    On Wed, Aug 25, 2010 at 12:55 AM, Eugene Lazutkin wrote:
    Basically it is a bet, AKA wishful thinking, that people will abandon the
    status quo, which is: CommonJS is firmly entrenched on server-side
    platforms. Don't get me wrong: I would love to see it happening.
    And yet, as you indicate, Yahoo is still supporting its loader
    approach on the server. I also think that among front-end devs, CommonJS
    is not well known. And even within CommonJS implementations, it still
    feels like it is hard to share modules, particularly given how
    different platforms allow setting the module export differently.
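
    To illustrate those differences (the function bodies are placeholders):

        // Portable CommonJS: you may only attach properties to exports.
        exports.parse = function (s) { /* ... */ };

        // Node (and some others): the exported value itself can be replaced.
        module.exports = function parse(s) { /* ... */ };

        // RequireJS sidesteps the question: the factory's return value *is*
        // the module's exported value.
        require.def("parser", [], function () {
            return function parse(s) { /* ... */ };
        });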

    And I do not need people to wholesale abandon CommonJS modules. I want
    to make sure whatever system I make can consume them, but hopefully
    provide a better path going forward.

    CommonJS might have an early lead, but I do not see the race as over.
    I would have liked to have been further along with a complete
    alternative, but such is life. Now that I am in it though, I am not
    willing to admit defeat yet. But that is just me, the rest of the Dojo
    community is free to choose otherwise.
    What is the strategy to move in on npm's turf? If you build it, they will
    come? Seriously, I would love to hear about it.
    I plan on being able to consume the packages listed in npm's
    directory. So, folks that like npm and Node can continue to put their
    utility modules in npm's registry, and RequireJS-based projects will
    be able to use them.

    Similarly, I want to support pulling scripts from the jQuery plugin
    repository. If it makes sense later, MooForge too. RequireJS has the
    advantage over other CommonJS-in-the-browser loaders in that it can
    load regular scripts too.
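
    For example --- the URL is made up, but the idea is that a dependency
    that is a plain .js URL loads as an ordinary script:

        require(["http://example.com/js/legacy-plugin.js"], function () {
            // The script just ran as a normal <script> tag would; use
            // whatever globals it defined, since a plain script has no
            // module value.
        });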

    James
  • Tom Trenka at Aug 25, 2010 at 11:40 am
    Couple of things to think about here...

    1. I am *not* advocating that Dojo 2.0 be a browser-only toolkit. If that
    were the case, I would have jumped on the feature detection bandwagon a
    while ago.
    2. I am also *not* saying that using a package format is a bad idea,
    though I'm still undecided on the way CommonJS has defined it.

    What I *am* saying is that if there are to be packages, we *cannot* force
    people to download and use a tool just to get started with anything that
    is not "in the box" for Dojo 2.0. If there are no alternate ways of
    setting up packages, such as easy manual setup, a web-based tool (i.e.
    zero-install), or other methods (which I think is part of the point of
    this thread), then we lose.

    We have to remember that we are not the only consumers of the toolkit, and
    more tools is not the answer to common problems that developers using DTK
    have. I think the confusion surrounding the current build system--and our
    common answer to problems, "just do a build"--is a perfect
    example of this.

    If there is to be a package manager, that's fine. At the same time, the
    submitted patch to allow a developer to load a package on the fly should
    be there too...there's no reason why someone shouldn't be able to just
    get up and running without having to use an auxiliary set of tools to do
    so. JavaScript overall wins because all you really need is a browser, a
    text editor and (probably) a web server; the more tools we inject into
    that process, the more barriers we place in front of a typical developer
    using what we've written, and I would submit that that is bad, bad, bad.

    So to reiterate my point: any external/auxiliary tools that we may offer to
    aid in the development process *must* be optional; if possible, we should
    offer zero-install web-based tools that any developer can use easily.

    Regards--
    Tom
  • Tom Trenka at Aug 25, 2010 at 12:11 pm
    (Subnote to the above email: or a JS server-side stack ;))

    -- Tom
  • James Burke at Aug 25, 2010 at 4:41 pm

    2010/8/25 Tom Trenka <ttrenka at gmail.com>:
    What I *am* saying is that if there are to be packages, we *cannot* force
    people to download and use a tool just to get started with anything that
    is not "in the box" for Dojo 2.0. If there are no alternate ways of
    setting up packages, such as easy manual setup, a web-based tool (i.e.
    zero-install), or other methods (which I think is part of the point of
    this thread), then we lose.
    A zero-install option would be easier if the package directory
    layout/path assumptions were simpler, and if legacy scripts did not have
    to be supported.

    I can see a config option for RequireJS that says something like
    "onlyPackages": true, which indicates that only packages are in play;
    if a specific package does not have an explicit path mapping, then the
    loader always looks in a specific subfolder, call it "packages", and
    assumes all of those packages follow the packageName/lib/main.js
    mapping for the module that represents require("packageName").

    This means though that the packages have to become *very* uniform.
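
    A sketch of how that could look, using the configuration style from
    earlier in the thread ("onlyPackages" is only the proposed option name,
    not something that exists today):

        // Set before require.js loads, as usual for loader config.
        require = {
            onlyPackages: true
        };

        // Later, in application code. With no explicit path mapping,
        // require("foo") would resolve to packages/foo/lib/main.js, and
        // "foo/bar" to packages/foo/lib/bar.js.
        require(["foo", "foo/bar"], function (foo, bar) { /* ... */ });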

    Remote packages are harder to support manually. It is easy enough to
    support a package mapping to the package's lib directory, but in the
    package spec, the mappings for a package's dependencies are in the
    package.json file. So discovering the mappings for nested dependencies
    is harder. This was easy in Dojo 1.x land where dependencies were
    mostly on dojo/dijit/dojox modules that were all co-located.

    And as I indicated before, trying to use XHR to load a remote
    package.json file is fraught with errors. Letting the developer
    struggle through those errors does not help the image of a toolkit. A
    command line tool is more elegant and developer-friendly. There can be
    some initial issues with installing Node or Java, but the other errors
    it prevents make it worth it. IMO :)

    You can try to convince the CommonJS folks to support an xdomain,
    browser-friendly package.json, but that is yet another thing to
    include in a package. I would not want to pin my hopes on that. I want
    the system for package/dependency discovery to work the same on the
    server side and in a browser-targeted project. Not every CommonJS
    developer will want to do a browser-friendly version, and I still want to
    have access to those packages. The moment there are mixed-mode packages
    (some with package.json, some that include package.js), there are more
    special cases for the developer to wade through.

    It is probably worth trying to think of another way to deliver and
    consume packages. The tricky parts with the current package layouts:
    - The path to the modules (the "lib" mapping) inside a package can be
    configured in package.json
    - Mappings for some of the module IDs used in the modules are stored
    in package.json

    That approach makes things more decentralized and more flexible, but it
    does cause some issues in the browser, where we just want one path to
    load a module from, and some of that path/dependency knowledge is in the
    package.json.
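
    For concreteness, both pieces of knowledge live in a file along these
    lines (field values invented):

        {
            "name": "foo",
            "version": "1.0.0",
            "directories": { "lib": "./src" },
            "mappings": {
                "json": "http://dojotoolkit.org/dojox/json/"
            }
        }

    A browser loader holding only a module URL cannot know to look in ./src,
    or where "json" points, without first fetching and parsing this file ---
    which is exactly the xdomain problem above.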

    James
  • Tom Trenka at Aug 25, 2010 at 5:19 pm
    BTW, I think you're right (@James) on the idea that a package needs to be
    grabbed at dev time and deployed as part of a deployment; being able to
    download and unzip something into a stock directory setup, using even the
    current dojo.require, would be fine, as long as it's not difficult to set
    up any dependencies said package might have. I really don't think we need
    a tool for this, and I'm not entirely sure we need packages to be
    loadable by an application. And though I have not played with Node or
    CommonJS, I find it hard to believe that you would need to load a package
    full-on at run-time in those environments either.

    Perhaps we're over-thinking this a bit?

    -- Tom
  • Dustin Machi at Aug 25, 2010 at 5:50 pm
    This is basically what that branch does. You are not required to use the package.json, and as I mentioned, my original intention for 1.x was NOT to use package.json but simply to assume a specific directory structure. If we don't want to allow those directories to be configurable, it should be easy enough to eliminate the processing. Or we could even say registerPackageMappings() assumes the default layout unless you specifically tell it to load the package mapping in. Again, remember that package mappings do nothing for modules or anything else in Dojo; they simply do the registerModulePath() calls for you.

    If the choice is between dynamic discovery at runtime (during development only) and requiring processing of packages after every install or modification, I'm much more inclined to use the package.json dynamically, since that has no impact on production and a large impact on development.
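
    As a rough sketch of what that means (registerPackageMappings() is from the experimental branch; this body is an approximation, not the actual code):

        function registerPackageMappings(mappings) {
            for (var name in mappings) {
                if (mappings.hasOwnProperty(name)) {
                    // Assume the default layout: modules under lib/
                    dojo.registerModulePath(name, mappings[name] + "lib");
                }
            }
        }

        registerPackageMappings({ "foo": "http://some-site.com/foo/" });
        // ...which is just sugar for:
        // dojo.registerModulePath("foo", "http://some-site.com/foo/lib");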

    Dustin
    On Aug 25, 2010, at 5:19 PM, Tom Trenka wrote:

    [...]
  • James Burke at Aug 25, 2010 at 8:54 pm

    2010/8/25 Tom Trenka <ttrenka at gmail.com>:
    [...]
    Perhaps we're over-thinking this a bit?
    Feel free to outline a system you think will work.

    If you think that means dynamically loading package.json files and
    parsing them, despite the warnings I gave about issues with xdomain
    auth, and about the remote server possibly not wanting to act as a
    runtime file server for your app, feel free to try that out. Working
    code helps. Dustin's branch is a way to try that stuff out. I don't
    think it is a full test, since the dojox modules it deals with are
    under our control, but perhaps something else can be demonstrated
    with the approach.

    I feel like we keep cycling over the same things, so I will stop
    talking about them for now. I'm going to code now, to see where things
    fall apart.

    James
  • Tom Trenka at Aug 25, 2010 at 9:15 pm
    Well, I think we're talking like ships passing in the night =) so I'll take
    another stab at it.

    If we are talking about a developer grabbing, say, dojox.string as a package
    to use in development, then I'd expect an interaction like MooForge's: you
    find what you're looking for, you download a zip/tarball, you get
    instructions on where to put it and how to use it, and you add it to your
    dev setup. And like the Forge, getting up and running with it is basically
    a breeze if you know the system you're working with.

    At that point we're talking about defining a "package" as something that you
    can grab and use, but that is a little smarter; it knows its own
    dependencies, and either comes with those dependencies or alerts you when
    they aren't there (and, hopefully, where to get them).
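
    A minimal descriptor along those lines might be (hypothetical
    names and versions, borrowing the CommonJS package.json shape):

        {
            "name": "dojox-string",
            "version": "0.1.0",
            "dependencies": {
                "dojo": ">=2.0"
            }
        }

    A download tool, or the developer reading the file, can then check
    that "dojo" is present before using the package.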

    If we are talking about module structure (i.e. how to load things such as
    the current dojox.charting as an example), I don't see any reason why we'd
    need anything special outside of whatever loader we end up using. In other
    words, we have two separate problems: how to get external-ish code into a
    place where it can be developed against, and how to continue the evolution
    of the DTK loading system. I think it'd be nice to be able to say "I want
    to try out X" by using a discovery/x-domain tool, but at the same time I
    think it's just as easy to download it, drop it into an app setup and go.
    You can always delete it ;)

    At that point it becomes a question of how to discover new packages, whether
    they solve a problem for the dev, and how to make sure they are described
    and searchable (plus a trust factor).

    I guess in the end it depends on how someone defines "package". Your
    descriptions have made me think of Linux/Python-ish things, and I don't
    know that that's what we need to do... anyway, food for thought.

    Regards,
    Tom
    On Wed, Aug 25, 2010 at 7:54 PM, James Burke wrote:

    [...]
