PerlBuildSystem

Todo.txt

_builders using shell commands only should automatically generate warp2 stubs
	=> what about adding it to the tree through a node sub?
		=> meta rules are a bit more difficult
	=> getting it as a result from the builder
		=> builders can serialize themselves (a sketch follows)
		=> need a special mode when building Warp2 data
			for nodes already built
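
A minimal sketch of the self-serialization, assuming a builder is a list of shell command strings; generate_warp2_stub and the stub layout are made up, not existing PBS API:

	# a builder whose commands are all plain shell strings can be dumped
	# as perl code and evaled back when building from Warp2 data
	use Data::Dumper ;

	sub generate_warp2_stub
	{
	my ($node_name, $shell_commands) = @_ ;

	for my $command (@$shell_commands)
		{
		return undef if ref $command ; # a code ref is a perl builder, not serializable
		}

	return Data::Dumper->Dump([{NAME => $node_name, COMMANDS => $shell_commands}], ['warp2_stub']) ;
	}

Evaling the dump back gives $warp2_stub, so nodes already built could be rebuilt without running their Pbsfile.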
			
^when merging nodes in c_depender, shouldn't we set the dependent to get ancestors properly?
	=> yes, __DEPENDENCY_TO should be set or we get an error (sketched below).
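
A minimal sketch of the fix, assuming nodes are hash refs keyed by __NAME and __DEPENDENCY_TO maps dependent names to a true value (merge_node is a made up name):

	# when c_depender merges a node already in the tree, record the new
	# dependent too, otherwise ancestor traversals fail on the merged node
	sub merge_node
	{
	my ($inserted_nodes, $node, $dependent_name) = @_ ;

	my $merged = $inserted_nodes->{$node->{__NAME}} //= $node ;
	$merged->{__DEPENDENCY_TO}{$dependent_name} = 1 ;

	return $merged ;
	}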

^C file was removed when it couldn't be depended !!!!!
	made an error in the include path, forgot -I
	
Change ImportTrigger(file) to Import(file, Trigger) ;

...trigger inserted nodes are searched in $inserted_nodes only; this means that trigger
	inserted nodes from a sub pbs with LOCAL_NODES will NOT be found!
	=> keep all the LOCAL $inserted_nodes in a list (a sketch follows)
		this would make it easy to know how many nodes we really have in the tree
		warp 1.1 wouldn't need to traverse a tree (hmmm, how do we rebuild multiple trees?)
		The original $inserted_nodes should be in the list too
		PBS could push $inserted_nodes in the list when it is not defined?
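
A minimal sketch of such a list, assuming every pbs run owns an $inserted_nodes hash ref; the registry and function names below are made up:

	# every tree, including subpbs trees run with LOCAL_NODES, registers
	# its inserted nodes hash here; trigger lookups then scan all of them
	my @all_inserted_nodes ;

	sub register_inserted_nodes
	{
	my ($inserted_nodes) = @_ ;

	# pbs() can push the hash itself when it creates one
	push @all_inserted_nodes, $inserted_nodes
		unless grep { $_ == $inserted_nodes } @all_inserted_nodes ;
	}

	sub find_trigger_inserted_node
	{
	my ($name) = @_ ;

	for my $inserted_nodes (@all_inserted_nodes)
		{
		return $inserted_nodes->{$name} if exists $inserted_nodes->{$name} ;
		}

	return ;
	}

	# counting how many nodes we really have becomes trivial
	sub node_count { my $count = 0 ; $count += keys %$_ for @all_inserted_nodes ; $count }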

Remove spurious references to full-path C files introduced by the depender

make warp2 give the possibility to modify the warp tree without running the subpbs
make warp2 -j and distributor aware
rewrite the rule generator so shell-only commands can be warp2-enabled automatically

warp2 tries to regenerate the sub builder for a node
warp2.5 tries to use the warp2 nodes within a normal pbs

... --load as in make

More resilience if a build dies without returning any error
	time out? multiple timeouts and then kill?
	Restart the build? what if the shell was a fixed one?
	we must keep info on which node was being built (a sketch follows)
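
A minimal sketch of a watchdog, assuming builders run in forked processes; the names and the single-timeout policy are made up:

	use POSIX ':sys_wait_h' ;

	# kill a builder that neither finishes nor returns an error, and
	# remember which node was being built so it can be reported or retried
	sub build_with_timeout
	{
	my ($node_name, $builder, $timeout) = @_ ;

	my $pid = fork() // die "fork: $!" ;

	if ($pid == 0)
		{
		$builder->() ;
		exit 0 ;
		}

	my $waited = 0 ;

	while (waitpid($pid, WNOHANG) == 0)
		{
		sleep 1 ;

		if (++$waited >= $timeout)
			{
			# only the process dies; a fixed shell survives for a restart
			kill 'KILL', $pid ;
			waitpid($pid, 0) ;
			return (0, "build of '$node_name' timed out after ${timeout}s") ;
			}
		}

	return ($? == 0, "build of '$node_name' exited with status $?") ;
	}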
	
Catch ^C when building and die nicely (a sketch follows)
	=> show what builders we are killing and what they were building
	=> generate warp file
		=> complicated as warp needs the inserted files list and ^C can happen at any time
		=> none of the built files will get their md5; this means the next run will be slow anyhow
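
A minimal sketch, assuming the pid and node name of each running builder are tracked; %builders_running is made up, and as noted above a real handler would still need the inserted files list to generate the warp file:

	# on ^C: show what builders we are killing and what they were building,
	# then die nicely instead of leaving half dead children behind
	my %builders_running ; # pid => name of the node being built

	$SIG{INT} = sub
		{
		print STDERR "\n** build interrupted **\n" ;

		while (my ($pid, $node_name) = each %builders_running)
			{
			print STDERR "killing builder $pid, was building '$node_name'\n" ;
			kill 'TERM', $pid ;
			}

		waitpid $_, 0 for keys %builders_running ;

		# interrupted nodes have no md5, the next run re-checks them anyway
		die "interrupted by user\n" ;
		} ;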
	
distributors should be more dynamic than just handing out a shell; they should be called when scheduling a node too (a sketch follows).
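
A minimal sketch of a two-phase distributor; the method names are made up and only the build-time hand-out phase resembles what exists today:

	package DynamicDistributor ;

	sub new
	{
	my ($class, @shells) = @_ ;
	return bless {SHELLS => [@shells], LOAD => {}, ASSIGNED => {}}, $class ;
	}

	# phase 1, called when a node is scheduled: pick the least loaded
	# shell, so placement decisions happen before the build starts
	sub ScheduleNode
	{
	my ($self, $node_name) = @_ ;

	my ($shell) = sort { ($self->{LOAD}{$a} // 0) <=> ($self->{LOAD}{$b} // 0) }
			@{$self->{SHELLS}} ;

	$self->{LOAD}{$shell}++ ;
	$self->{ASSIGNED}{$node_name} = $shell ;
	}

	# phase 2, called at build time as today: hand out the shell that
	# was chosen when the node was scheduled
	sub GetShellForNode
	{
	my ($self, $node_name) = @_ ;
	return $self->{ASSIGNED}{$node_name} ;
	}

	1 ;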


...Should the build directory be based on a warp signature?
	on hold till we decide how to differentiate them for the user
		md5 is not user friendly
	what if we want to reuse most of what is in the out directory even
		if the config is slightly different

what about a specialized filesystem to speed up dependency checking?
	we "serialize" the dependency tree and let the filesystem do the triggering
	this would let us know very quickly whether a whole subtree has been modified or not
	Problems to be fixed:
		How do we serialize?
			FUSE, FAM, nullfs, overlayfs, portalfs
			
			each warp has a unique signature; we can keep a list of dependents in a warp-signed file
			each file in a dependency file gets a 'special' file named: warp_sig + filename (i.e. X) in the
				directory where the file is. The fs checks that file (and all the other special files with
				other warp sigs for file X). The trace file (the special file) for a given warp signature
				contains a list of all the dependents of file X and the location where to write the trace data
				if file X were to be changed. It could also contain the original md5 of file X.
				It would make sense to have the trace file be a perl script that is evaled from the fs;
				this would be effective if we embed a perl interpreter into the fs, as it would free the fs
				from the logic internals. All that needs to be done is eval a file if it exists (see the
				sketch at the end of this section).
				
				If a perl evaluable trace file solution is chosen, this could be done:
					- check the md5 before doing anything else
					- write the list of dependents where the original warp file is
						warp3 would evaluate the file to remove the nodes that need a rebuild
					  alternatively we could link/move the signature file to avoid writing altogether
					- a background build can be started

				trace file should fire if the file is removed too
				
				? Could we remove the trace file? This would "trigger" a build just once.

		Multiple users, projects, configs. Sometimes a single project might have multiple configs
			=> no problem with these, says ALI
			
		?How do we detect if a file is fiddled with outside the fs's control
			we can't guarantee it => md5
			use warp1
			disable access in normal fs
			
	@ since we follow warp signatures, we can find the systems that had the dependencies
		a system needs to be rebuilt when one of its dependencies is changed
		this means that we can start a build when a dependency changes, thus having an up-to-date
		system (for any of the builds on the fs)

		if no auto build is attempted, it is easy to write a tool that finds all the trace files that
		have been moved to the warp file's location, thus listing the projects that need a rebuild
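
A minimal sketch of the fs side of the trace file idea, assuming the trace file is a perl script that returns a hash ref; the file naming, the hash keys, and on_file_write are all made up:

	use Digest::MD5 ;

	# run by the fs on every write to a watched file: eval the trace
	# files if they exist, which frees the fs from all the warp logic
	sub on_file_write
	{
	my ($file) = @_ ;

	for my $trace_file (glob "$file.*.trace") # one per warp signature
		{
		my $trace = do $trace_file or next ; # eval the perl trace file

		# check the md5 before doing anything else
		open my $in, '<', $file or next ;
		next if Digest::MD5->new->addfile($in)->hexdigest eq $trace->{MD5} ;

		# write the dependents where the warp file expects trace data;
		# warp3 evals that later to remove the nodes needing a rebuild
		open my $out, '>>', $trace->{TRACE_DATA_LOCATION} or next ;
		print $out "$_\n" for @{$trace->{DEPENDENTS}} ;
		}
	}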

Warp signature should depend on all the configs, not only a chosen few (a sketch follows)
	problem with PBS setting data in the config after warp checked the config
		=> PBS shouldn't do that and should require the config to be set properly before pbs() is called
		=> we could separate setup data from run time data and generate the signature from the setup only
	if the config is set properly to start with, the warp signature computation always returns the same value
		=> cache the warp signature => cache the filename directly
	shouldn't we depend on the prf's content instead of its name?
		we shouldn't use it at all but use the config instead, after it has included the prf contents
	... warp file must md5 itself to avoid warp file corruption
		put this in a separate file
		? isn't this overkill? who is going to change a warp file?
	#=> move $number_nodes_in_DT to the end and check for defined or not
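
A minimal sketch of a full-config signature, assuming a flat config hash and a naming convention that separates run time data from setup data; the __RUNTIME prefix is made up:

	use Digest::MD5 'md5_hex' ;

	# signature over the whole config, not a chosen few keys; it is
	# cacheable once PBS stops writing into the config after the check
	sub warp_signature
	{
	my ($config) = @_ ;

	my $setup_only = join "\n",
		map { "$_=" . ($config->{$_} // '') }
		grep { ! /^__RUNTIME/ }  # run time data excluded from the signature
		sort keys %$config ;     # canonical order, stable signature

	return md5_hex($setup_only) ;
	}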
      
Check the header file override problem
	must add the include path
		=> change in the pbsfile
		=> must be documented

Can we distribute the pre-processing for distcc?
	1/ let n boxes run pbs till the dependency tree is built on all boxes (fs are synced)
		all boxes get the same build sequence
		the master builder gives a slice of nodes to build to the slave builders (a sketch follows)
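
A minimal sketch of the master builder's slicing, assuming all n boxes computed the identical build sequence; this naive version ignores dependencies between slices:

	use POSIX 'ceil' ;

	# hand contiguous slices of the common build sequence to the slave
	# builders; a real version must respect cross-slice dependencies
	sub slice_build_sequence
	{
	my ($build_sequence, $number_of_slaves) = @_ ;

	my $slice_size = ceil(@$build_sequence / $number_of_slaves) ;

	my @sequence = @$build_sequence ;
	my @slices ;

	push @slices, [splice @sequence, 0, $slice_size] while @sequence ;

	return @slices ; # slice $i goes to slave builder $i
	}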


