From: pozz
Newsgroups: comp.arch.embedded
Subject: Re: Improving build system
Date: Fri, 16 May 2025 12:46:56 +0200
Message-ID: <100752v$3kivm$1@dont-email.me>
References: <1001m9t$2drv1$1@dont-email.me> <100338u$2c42e$1@dont-email.me> <1004all$3218k$1@dont-email.me> <1005m3f$3aqfb$1@dont-email.me> <1006vho$3miqf$1@dont-email.me>
In-Reply-To: <1006vho$3miqf$1@dont-email.me>

On 16/05/2025 11:12, David Brown wrote:
> On 15/05/2025 23:25, pozz wrote:
>> On 15/05/2025 11:03, David Brown wrote:
>>> On 14/05/2025 23:51, pozz wrote:
>>>> On 14/05/2025 11:03, David Brown wrote:
>>>>> On 13/05/2025 17:57, pozz wrote:
>>>> [...]
>>
>> I worked on PIC8 and AVR8 and IMHO AVR8 is much better than PIC8.
>> Regarding Cortex-M, SAM devices are fine for me.
>
> The 8-bit PICs are extraordinarily robust microcontrollers - I've
> seen devices rated for 85 °C happily running at 180 °C, and
> tolerating short-circuits, over-current, and many types of abuse.
> But the processor core is very limited, and the development tools
> have always been horrendous.  The AVR is a much nicer core - it is
> one of the best 8-bit cores around.  But you are still stuck working
> in a highly device-specific form of coding instead of normal C or
> C++.

Why do you write "highly device-specific form of coding"?  Considering
they are 8-bit parts (and C requires int to be at least 16 bits), what
you get when you compile with avr-gcc seems acceptable C to me.  You
can use int variables without any problem (they will be 16 bits).  You
can call functions with parameters, and you can return complex data
from functions.

Of course flash memory is in a different address space, so you need a
device-specific API to read data from flash (avr-libc's PROGMEM
attribute and the pgm_read_*() functions).

Do you know of other 8-bit cores with better C compiler support?

> And you are still stuck with Microchip's attitude to development
> tools.  (You can probably tell that I find this very frustrating - I
> would like to be able to use more of Microchip / Atmel's devices.)

Maybe we already talked about this in the past.  I don't know whether
avr-gcc was developed by Atmel or by the Arduino community.  Anyway,
for AVR8 you have the possibility of using the gcc tools for compiling
and debugging, and there are many open source tools.  I think you
could completely avoid the Microchip/Atmel IDE for AVR8 without any
problems.  The Arduino IDE is a good example.
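In fact, a tiny makefile is enough to build for AVR8 outside the IDE.
Something like this, just a sketch (the MCU, file names and flags are
only my examples):

# avr-gcc cross-compilation with no IDE at all
CC     := avr-gcc
# example device, pick the real one
MCU    := atmega328p
CFLAGS := -mmcu=$(MCU) -Os -Wall

# Link the application (app.o comes from the built-in %.o: %.c rule,
# which already uses $(CC) and $(CFLAGS))
app.elf: app.o
	$(CC) $(CFLAGS) -o $@ $^

# Intel HEX output for the programmer
app.hex: app.elf
	avr-objcopy -O ihex -R .eeprom $< $@

(recipe lines start with a tab, as usual).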
>>>>> 2.
>>>>>
>>>>> You don't need to use bash or other *nix shells for makefile or
>>>>> other tools if you don't want to.  When I do builds on Windows, I
>>>>> run "make" from a normal command line (or from an editor / IDE).
>>>>> It is helpful to have msys2's usr/bin on your path so that make
>>>>> can use *nix command-line utilities like cp, mv, sed, etc.  But
>>>>> if you want to make a minimal build system, you don't need a full
>>>>> msys2 installation - you only need the utilities you want to use,
>>>>> and they can be copied directly (unlike with Cygwin or WSL).
>>>>>
>>>>> Of course you /can/ use fuller shells if you want.  But don't
>>>>> make your makefiles depend on that, as it will be harder to use
>>>>> them from IDEs, editors, or any other automation.
>>>>
>>>> In the beginning (some years ago) I started by installing GNU Make
>>>> for Windows, putting it in c:\tools\make.  Then I created a simple
>>>> Makefile and tried to process it on a standard Windows command
>>>> line.  It was a mess!  I remember there were many issues regarding
>>>> slash/backslash in file paths, the lack of Unix commands (rm, mv,
>>>> ...) and so on.  Native Windows tools need backslashes in paths,
>>>> but some Unix tools need slashes.  It was a mess to transform the
>>>> paths between the two forms.
>>>
>>> Most tools on Windows are happy with forward slash for path
>>> separators as well.
>>
>> mkdir, just to name one?  And you need mkdir in a Makefile.
>
> Don't use the crappy Windows-native one - use msys2's mkdir.  As I
> said:
>
> bin_path :=
> RM := $(bin_path) rm
> MKDIR := $(bin_path) mkdir
>
> and so on.
>
> Now your makefile can use "mkdir" happily - with forward slashes,
> with "-p" to make a whole chain of directories, and so on.

Yes, sure, now I know.  I was responding to your "Most tools on
Windows are happy with forward slash".  I thought your "tools on
Windows" were native Windows commands.

I think your suggestion is: explicitly call the msys tools (rm, mkdir,
gcc) from the normal Windows CMD shell, without insisting on using the
msys shell directly.  Maybe this will help with integration with
third-party IDEs/editors (such as VSCode, C::B, and so on).
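So something like this at the top of the makefile, I suppose (the
bin_path value is only my guess for a default msys2 installation):

# host-specific path to the msys2 command-line utilities
bin_path := c:/msys64/usr/bin/

RM    := $(bin_path)rm -f
MKDIR := $(bin_path)mkdir -p

build_dir := build

# "mkdir -p" creates the whole chain; forward slashes are fine
$(build_dir):
	$(MKDIR) $@

.PHONY: clean
clean:
	$(RM) -r $(build_dir)

This way the recipes run the msys tools even from a plain CMD prompt
or from an IDE, with no PATH tricks.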
> Once you have left the limitations of the Windows default command
> shell builtins behind, it is all much easier.  For utilities like
> "cp" and "rm" it is a little more obvious since the names are
> different from the DOS leftovers "copy" and "del" - unfortunately
> "mkdir" is the same name in both cases.

Indeed.

>>> Certainly everything that is originally a *nix tool will be fine
>>> with that.
>>>
>>> Of course if you have a makefile that uses commands like "rm" and
>>> you don't have them on your path, and don't specify the path in the
>>> makefile, then it won't work.  This is why the norm in advanced
>>> makefiles is to use macros for these things :
>>>
>>> # Put this in the host-specific file, with blank for no path needed
>>> bin_path :=
>>>
>>> # Use this instead of "rm".
>>> RM := $(bin_path) rm
>>
>> Initially I insisted on using native Windows commands: DEL, MKDIR,
>> COPY and so on.  Finally I gave up.
>
> Excellent decision.
>
>>>> After this attempt, I gave up.  I thought it was much better to
>>>> use the IDE and build system suggested by the MCU manufacturer.
>>>
>>> For most IDEs, the build system is "make".  But the IDE generates
>>> the makefiles - slowly for big projects, and usually overly
>>> simplistic with far too limited options.
>>>
>>> But IDEs are certainly much easier for getting started.  On new
>>> projects, or new devices, I will often use the IDE to get going and
>>> then move it over to an independent makefile.  (And I'll often
>>> continue to use the IDE after that as a solid editor and debugger -
>>> IDEs are generally happy with external makefiles.)
>>
>> I'm going to create a new post regarding editors and debuggers...
>> stay tuned :-D
>
> You are keeping this group alive almost single-handedly :-)  Many of
> us read and answer posts, but few start new threads.

I'm the student, you are the teachers, so it is normal that I am the
one asking the questions :-D

[OT] I like newsgroups for chatting with others about specific topics.
Unfortunately, newsgroups are nowadays dying in favor of other social
platforms: Facebook, reddit, blogs...  Do you know of other active
platforms about embedded systems?

>>>> Now I'm trying a Unix shell on Windows (msys, WSL or even the bash
>>>> installed with git) and it seems many of the issues I had are
>>>> disappearing.
>>>>
>>>>> And of course you will want an msys2/mingw64 (/not/ old mingw32)
>>>>> for native gcc compilation.
>>>>
>>>> The goal of the simulator is to detect problems in the software by
>>>> running it directly on Windows, without flashing, debug probes and
>>>> so on.  I increased my productivity a lot when I started this
>>>> approach.
>>>>
>>>> Obviously, the software running on Windows (the simulator) should
>>>> be very similar to the software running on the embedded target.
>>>> Cortex-M MCUs are 32-bit, so I thought it would be better to use a
>>>> 32-bit compiler for the simulator too.
>>>
>>> mingw-w64 can happily generate 32-bit Windows executables.  IIRC
>>> you just use the "-m32" flag.  It is significantly better than old
>>> mingw in a number of ways - in particular it has vastly better
>>> standard C library support.
>>
>> Why doesn't it work for me?  I open a Msys2/mingw64 shell and...
>>
>> $ gcc -m32 -o main.exe main.c
>> C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: skipping incompatible C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/lib/libmingw32.a when searching for -lmingw32
>> ...
>> ... and much more
>
> It looks like you don't have the 32-bit static libraries included in
> your msys2/mingw64 installation - these things are often optional.
> (It might be referred to as "multi-lib support".)  I haven't used gcc
> on Windows for a long time - most of my work is on Linux.  But I'm
> sure that you'll find the answer easily now you know it is the 32-bit
> static libraries (libmingw32.a) that you are missing.

In many places the suggestion is to use msys2/mingw32 for generating
32-bit Windows binaries - for example here [1].

[1] https://superuser.com/questions/1473717/compile-in-msys2-mingw64-with-m32-option
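In the meantime, I suppose the simulator makefile can simply point at
the mingw32 compiler explicitly - the same idea as putting the
toolchain path in a macro (the path is only my guess for a default
msys2 installation with the 32-bit toolchain added):

# native 32-bit compiler for the Windows simulator build
SIM_CC := c:/msys64/mingw32/bin/gcc

main.exe: main.c
	$(SIM_CC) -o $@ $<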
>>>> I guess the only goal of host_xxx.mk is to avoid changing PATH
>>>> before make.  Why don't you like setting the PATH according to the
>>>> project you're working on?
>>>
>>> No, that is not the only goal - there can be many differences
>>> between machines.  For example, I usually have ccache on my Linux
>>> systems but it is rare to have it on (native) Windows systems -
>>> thus that can be enabled or disabled in a host_xxx.mk file.  Some
>>> machines might also support building the documentation, or running
>>> a simulator, or signing binaries.
>>>
>>> Setting the path would be an extra complication of no benefit, but
>>> a significant source of risk or error.  How do you make sure your
>>> IDE is using the right PATH settings before it runs "make"?  How do
>>> you deal with multiple projects - do you keep swapping PATHs?  (I
>>> usually have a half-dozen projects "open" at a time, in different
>>> workspaces on my Linux machine.)  Do you now have a makefile and a
>>> separate path-setting batch file or shell script that you need to
>>> run before doing a project build?  How do you handle things when
>>> you install some new Windows program that messes with your path?
>>>
>>> It is /vastly/ simpler and safer to put the paths to the binaries
>>> in a couple of macros in your makefile(s).  It also gives clear and
>>> unequivocal documentation of the tools you need - if your makefile
>>> has this line :
>>>
>>> toolchain_path := c:/micros/gcc-arm-none-eabi-10_2020-q4-major/bin/
>>>
>>> then there is never any doubt as to exactly which toolchain is used
>>> for the project.
>>
>> I see your points.  The only drawback seems to be putting a bunch of
>> host_xxx.mk files in the repository.  If the developer team and
>> their development machines are well defined and static, everything
>> goes well.
>
> Typically the host_xxx.mk files will be pretty much the same for each
> Windows system and each Linux system.  You might find it simpler to
> just have a single file that checks for the OS and sets the paths
> specifically, without bothering about host details.
>
>> However, what happens when a new developer pulls your repository and
>> wants to build?  First, he must create his own host_xxx.mk, starting
>> to pollute the original repository.  Instead, by using the PATH, he
>> could build without touching any files in the repo.
>
> How often does a new developer join the team - or how often do you
> add a new host?  If it is once every few years, it doesn't matter.
> If it happens regularly, then this will be a pain and you might want
> to have a different scheme (such as common setups on all Linux
> systems and all Windows systems).  But using the PATH is much worse
> IME.

Ok, this makes sense.

>> Maybe this isn't our situation, but a public open-source repository
>> can't use your approach.  It's impossible to include tens or
>> hundreds of host_xxx.mk files in a public repository.
>
> Sure.
>
> That's a completely different kind of project, however.  In open
> source projects you'll want to make the system compilable with a wide
> range of tools, versions and options, and you expect a lot of varied
> changes to the code.  That's entirely different from a serious
> commercial embedded system where you want to be able to make a
> release of the project and check it in, then ten years later check it
> out on a different machine and OS, do a rebuild, and get bit-perfect
> identical binaries.  I am not suggesting a one-size-fits-all
> solution.
>
>> Moreover, what happens if two developers both like astronomy and set
>> the hostnames of their development machines to JUPITER?  Maybe one
>> uses Linux, the other Windows.
>
> Use your imagination :-)
>
>> In your make, it seems you include the correct host_xxx.mk file
>> automatically from the hostname.
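Something like this, I imagine - just my sketch of how that include
could look (the hostname/OS tests are my guesses):

# pick the host-specific file from the hostname, with an OS fallback
# ($(OS) is set to "Windows_NT" by Windows itself)
hostname := $(shell hostname)

ifneq ($(wildcard makes/host_$(hostname).mk),)
  include makes/host_$(hostname).mk
else ifeq ($(OS),Windows_NT)
  include makes/host_windows.mk
else
  include makes/host_linux.mk
endif

The OS fallback would also be the "single file that checks for the OS"
you mentioned above.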
>>>>> 6.
>>>>>
>>>>> Learn to use submakes.  When you use plain "make" (or, more
>>>>> realistically, "make -j") to build multiple configurations, have
>>>>> each configuration spawned off in a separate submake.  Then you
>>>>> don't need to track multiple copies of your "TARGET" macro in the
>>>>> same build - each submake has just one target, and one config.
>>>>
>>>> I don't think I got the point.  Now I invoke the build of a single
>>>> build configuration.  Are you talking about running make to build
>>>> multiple configurations at the same time?
>>>
>>> Yes.
>>>
>>> Obviously it depends on the stage of development you are at and the
>>> kind of project - much of the time, you will want to build just one
>>> configuration.  But sometimes you will also want to make multiple
>>> builds to check that a small change has not caused trouble
>>> elsewhere, or for different kinds of testing.  Why run multiple
>>> "make" commands when you can do a full project build from one
>>> "make"?
>>
>> Are you thinking of something similar to:
>>
>> all_configs:
>>      $(MAKE) -j 4 CONFIG=FULL
>>      $(MAKE) -j 4 CONFIG=STANDARD
>>      $(MAKE) -j 4 CONFIG=LITE
>
> Don't use "-j" on the submakes - just use "$(MAKE)" and it will
> inherit the job count from the first instance, which acts as the
> jobserver.
>
>> With my actual Makefile, "make all_configs" returns an error because
>> CONFIG is not specified.
>
> You could put something like :
>
> CONFIG ?= FULL
>
> to give a default configuration.
>
> I actually have something like :
>
> ifneq "$(submake)" "1"
>   # This is the original main make, used only to start the sub-makes
>   # "progs" is a list of the programs, or configurations, to build
>
>   # Get any non-prog goals
>   goals := $(filter-out $(all_progs),$(MAKECMDGOALS))
>
>   define submake_template
>     # $(1) = prg
>     .PHONY : $(1)
>     $(1) :
>       @echo Spawning submake for $(1)
>       +$(MAKE) --no-builtin-rules $(goals) prog=$(1) submake=1
>   endef
>   $(foreach prg,$(progs),$(eval $(call submake_template,$(prg))))
> else
>   # We are in the sub-make for a configuration
>   include makes/main.mk
> endif
>
> Thus the only thing that is done from the original instance of "make"
> is to start as many submakes as appropriate, each with a specific
> CONFIG and with the submake variable set.

Ok, thank you.
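So, if I understand correctly, a minimal version of my all_configs
target could become something like this (my sketch of your suggestion,
with the same configuration names as above):

CONFIG ?= FULL

configs := FULL STANDARD LITE

.PHONY: all_configs $(configs)
all_configs: $(configs)

# one submake per configuration: no "-j" here, the submakes share
# the job slots of the jobserver in the parent make
$(configs):
	+$(MAKE) CONFIG=$@

Then "make -j8 all_configs" builds the three configurations in
parallel, with the first instance of make acting as the jobserver.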