Many professions maintain a code of conduct. Doctors swear the Hippocratic oath. Lawyers must uphold their jurisdiction’s ethics rules. Politicians and military personnel often swear to uphold their country’s constitutional documents. Even Starbucks baristas must abide by health and safety regulations. There are rules for almost every field, but not quite all of them.

As a lawyer, I must maintain high ethical standards, but when our firm deals with intellectual property (IP), we interact with a sector that is ill served by ethics rules or codes of conduct: the creation of intellectual property itself.

Perhaps the best-known illustration comes from the United States. During World War II, the Manhattan Project in New Mexico researched, developed, and assisted in the use of the first atomic bomb. Though the bomb was obviously a weapon, its creators, including J. Robert Oppenheimer, who directed the project, later experienced intense doubt about the purpose for which their invention had been used.

“Now I am become Death, the destroyer of worlds,” the famously cited line from the Bhagavad Gita, reflected Oppenheimer’s grave concern that he had brought a terrible evil into the world. Scientists have often faced this dilemma: whether to pursue ideas that may be used to commit violence or terror, or to seek some other, safer avenue that will not be so attractive to those who would use their work for ill.

In an article published in the most recent edition of Wired, Jackie Snow relates the efforts of a Native American named Amelia Winger-Bearskin to address this issue in one specific sector of the scientific community: software development.

Programmers write code in various computer languages and for various reasons. Often, previously written code is copied and pasted so that a programmer can achieve a purpose without laboriously recreating that code. Some of the oldest programming languages, such as C and C++, have bodies of existing code that stretch back decades, and many programs and systems today still make use of that code.

But there is no extant ethical code of conduct for computer programmers, and no way for them to monitor the use of their code after it has left their hard drives. This is especially true of programmers who have signed IP use agreements with their employers. These agreements are often very one-sided, giving companies ownership of the IP their employees create and thereby removing any limits the creators might otherwise place on its use.

This is very much the situation in the United States and other jurisdictions where software code can be copyrighted: the coder is creating a work for hire and cannot exercise ethical control over the code’s use.

Winger-Bearskin suggests that such a code of conduct should exist, and she offers a model that could, at a minimum, give coders an opportunity to indicate their wishes regarding the code they create.

Her model is akin to the readme files that emerged early in programming’s history to give users a guide to the software’s use and the purposes of specific pieces of code. Winger-Bearskin suggests that a similar attachment could accompany software code, reflecting the uses preferred by the coder. This would allow coders to express whether they want their code used for government purposes, weapons development, or other purposes entirely.
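As a concrete illustration, such an attachment might look something like the following. The format and wording are hypothetical, my own sketch rather than anything Winger-Bearskin or the Wired article prescribes:

```text
ETHICS-README
Author: Jane Coder
Permitted uses: research, education, commercial applications
Prohibited uses: weapons development, surveillance of civilians
Note: These are the author's stated preferences, not (yet) binding terms.
```

The value of a plain-text convention like this is the same as the original readme: any recipient of the code sees the author’s intent, even if nothing enforces it.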

While such an addition is not itself a code of conduct, it would allow coders to give input into the use, and abuse, of their code. The problem, however, is that a readme file filled with ethical guidance from the coder would impose no enforceable prohibitions on the coder’s employer or on others who gain access to the code. Nor would it act as industry-wide governance that all coders could apply to their programs. Without such a governing code, Winger-Bearskin’s idea has few teeth.

However, an unrelated development might offer some form of enforcement for a coder’s ethical interests.

As I’ve discussed (see Blockchains in Vietnam), smart contracts are programs recorded on a blockchain, usually Ethereum, that automatically perform the counterparty’s obligation once a specified action occurs. The common analogy is a vending machine. The offer comes from the machine’s owner in the price listed for the soda or snack; it is accepted when the buyer inserts payment; and, without any further human involvement, the soda or snack is delivered, concluding the contract.
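The vending-machine pattern can be sketched in a few lines of plain Python. This is an off-chain illustration of the offer–acceptance–automatic-performance loop, not a real on-chain contract; the class and method names are my own invention:

```python
# Minimal model of the smart-contract pattern: a standing offer (the price),
# acceptance (payment), and automatic performance (dispensing) with no
# further human involvement.

class VendingMachineContract:
    def __init__(self, price_cents, stock):
        self.price_cents = price_cents  # the standing offer
        self.stock = stock

    def insert_payment(self, amount_cents):
        """Acceptance: paying the listed price triggers automatic performance."""
        if amount_cents < self.price_cents:
            raise ValueError("payment below the offered price; no contract formed")
        if self.stock == 0:
            raise RuntimeError("out of stock; the offer cannot be performed")
        self.stock -= 1  # performance happens mechanically
        return {"item": "soda", "change_cents": amount_cents - self.price_cents}

machine = VendingMachineContract(price_cents=150, stock=3)
result = machine.insert_payment(200)
```

An on-chain version would encode the same conditional logic, with the blockchain rather than the machine’s hardware guaranteeing that performance follows acceptance.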

In theory, programming code could be published to a blockchain with the readme file serving as the definition of the offer, much like an NFT. A smart contract would then require the prospective user to make a legally binding commitment to abide by the readme’s contents. Once that acceptance is given, the buyer would automatically receive the code for use. The mechanism would also allow fees to be charged for the purchase.
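The gating logic described above can also be sketched off-chain. In the sketch below, the readme’s terms act as the offer, recorded acceptance plus payment is the condition, and the code is released automatically; the names (`CodeEscrow`, `purchase`) are illustrative assumptions, not a real library or protocol:

```python
# Hedged sketch: release source code only after the buyer accepts the
# coder's readme terms and pays the listed fee, mirroring the
# smart-contract mechanism proposed in the text.

class CodeEscrow:
    def __init__(self, code, readme_terms, fee_cents):
        self._code = code                 # the protected source code
        self.readme_terms = readme_terms  # the coder's stated use restrictions
        self.fee_cents = fee_cents

    def purchase(self, accepts_terms, payment_cents):
        """Release the code only on acceptance of the terms and payment."""
        if not accepts_terms:
            raise PermissionError("terms not accepted; code withheld")
        if payment_cents < self.fee_cents:
            raise ValueError("insufficient payment; code withheld")
        return self._code  # automatic performance

escrow = CodeEscrow(
    code="print('hello')",
    readme_terms="No weapons development; no surveillance of civilians.",
    fee_cents=500,
)
released = escrow.purchase(accepts_terms=True, payment_cents=500)
```

On an actual blockchain, the acceptance would be a signed transaction, which is what gives the commitment its evidentiary weight.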

Such a system would create a new gig economy for software programmers and give them greater control over the use of their code. It may not be perfect, but it might be a way for programmers to free themselves from the tyranny of work-for-hire contracts that otherwise prevent them from profiting from their intellectual property.