Initial patch: Stephen Polkowski <stephen@centtech.com>
Added a test case and tightened the operand checks so that vmread/vmwrite
to/from a 32-bit register is treated as illegal in 64-bit mode.
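For illustration only (these are not the lines of the added test case, which may
differ), the tightened check means that under BITS 64 only 64-bit register
operands are accepted:

    [bits 64]
    vmread  rax, rbx    ; legal: 64-bit destination and source
    vmwrite rbx, rax    ; legal
    ;vmread  eax, ebx   ; now rejected: 32-bit registers in 64-bit mode
    ;vmwrite ebx, eax   ; now rejected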
svn path=/trunk/yasm/; revision=1209
* x86id.re (yasm_x86__parse_cpu): Add cases for "PRESCOTT", "SSE3", and "PNI".
(yasm_x86__parse_check_id): Add cases for all 13 new instructions.
Update FILD and FISTP so that common fildstp_insn can be used.
(lddqu_insn): New instruction format.
* x86arch.h (CPU_SSE3): New. Renumber others.
* sse3.asm, sse3.hex, sse3.errwarn: New test case for SSE3 instructions.
* x86/tests/Makefile.inc: Include test in build.
svn path=/trunk/yasm/; revision=1204
expression processing. A crash is possible if we don't do this.
* nomem64-err.asm, nomem64-err.errwarn: New order forces split of this into:
* nomem64-err2.asm, nomem64-err2.errwarn: Move two cases here.
Found by: Ewout Prangsma <ewout@prangsma.net>
svn path=/trunk/yasm/; revision=1194
pages should now be more consistent across systems.
* arch/Makefile.inc: Build yasm_arch.7 at top level, and in dist_man_MANS
instead of man_MANS. Include build rule for building using XMLTO if XMLTO
is available.
* yasm/Makefile.inc: Likewise (for yasm.1).
* Makefile.am: Change man_MANS to dist_man_MANS. Remove general rules for
.xml.1 and .xml.7. Add dist_man_MANS to MAINTAINERCLEANFILES. Remove
man_MANS from BUILT_SOURCES, DISTCLEANFILES, and EXTRA_DIST.
svn path=/trunk/yasm/; revision=1189
deletes the old bytecode, so it's not safe to use anything from the old
bytecode after this point (such as the prefixes array passed to arch
finalize).
* x86bc.c (yasm_x86__bc_apply_prefixes): Take x86_common as parameter
rather than bytecode; add line parameter instead of referencing bc->line.
* x86arch.h (yasm_x86__bc_apply_prefixes): Likewise.
* x86id.re (x86_finalize_jmpfar, x86_finalize_jmp)
(yasm_x86__finalize_insn): Apply prefixes before bytecode transform.
svn path=/trunk/yasm/; revision=1186
dynamically loaded modules into the now-static libyasm. I now anticipate
that there would be very few users of the dynamic loading features, and it
yielded a lot of instability and build headaches for very little benefit.
The new build should now be much more cross-platform and faster (there was
a lot of overhead in finding and loading modules).
* libtool.m4, ltdl.m4: Delete.
* m4/Makefile.am: Rename to m4/Makefile.inc and remove references to above.
Change to use subdirectory (flat) build rather than recursive build.
* Makefile.am: Include m4/Makefile.inc rather than having it in SUBDIRS.
* libltdl: Delete.
* frontends/yasm/yasm-module.c: Delete.
* basename.c, dirname.c: Delete (no longer needed by yasm-module.c).
* genmodule.c, module.in: Generator and template for new module.c included
in libyasm that replaces the old yasm-module.c (module.in is a modified
rename of yasm-module.c).
* module.h: Modified rename of old yasm-module.h.
* libyasm.h: Include libyasm/module.h.
* libyasm/Makefile.inc: Build generator and include module.c in libyasm.
* yasm.c: Use new libyasm module interface.
* (many) Makefile.inc: Remove libtool libraries, build all modules into
libyasm library.
* configure.ac: Remove libtool/libltdl references.
* Mkfiles/vc/yasm-module.c: Remove. Still need to fix some of the other
Mkfiles/ build files for these changes.
svn path=/trunk/yasm/; revision=1183
before finalizing instructions.
* x86arch.h (yasm_x86__bc_apply_prefixes): Don't take num_segregs and
prefixes parameters.
* x86bc.c (yasm_x86__bc_apply_prefixes): Move segreg code to...
* x86id.re (yasm_x86__finalize_insn): Here (only user of this code).
(x86_finalize_jmp): Don't pass num_segregs and prefixes.
* x86arch.h (x86_common): New; refactored common bytecode parameters.
(x86_opcode): New; refactored opcode parameters.
(x86_insn): Refactor with x86_common and x86_opcode.
(x86_jmp): Likewise.
* x86id.re (x86_finalize_common, x86_finalize_opcode): New.
(yasm_x86__finalize_insn, x86_finalize_jmp): Use and update substruct refs.
* x86bc.c (x86_common_print, x86_opcode_print): New.
(x86_bc_insn_print, x86_bc_jmp_print): Use and update substruct refs.
(x86_common_resolve): New.
(x86_bc_insn_resolve, x86_bc_jmp_resolve): Use and update substruct refs.
(x86_common_tobytes, x86_opcode_tobytes): New.
(x86_bc_insn_tobytes, x86_bc_jmp_tobytes): Use and update substruct refs.
(yasm_x86__bc_apply_prefixes): Utilize refactor to simplify.
* bytecode.c (bc_insn_finalize): Simplify (or at least level) immediates
and memory operands before passing them off to yasm_arch_finalize_insn().
This may result in some minor performance improvement on complex static
expressions.
* x86arch.h (x86_parse_targetmod): Add X86_FAR_SEGOFF; this is only
generated due to a SEG:OFF immediate being detected during finalize.
(x86_jmp_opcode_sel): Remove JMP_FAR; this is now a separate bytecode.
(x86_jmp): Remove far opcode.
(x86_jmpfar): New bytecode for far jumps.
(yasm_x86__bc_transform_jmpfar): New.
* x86bc.c (x86_bc_callback_jmpfar): New.
(yasm_x86__bc_transform_jmpfar, x86_bc_jmpfar_destroy): New.
(x86_bc_jmp_print): Move far jump code to...
(x86_bc_jmpfar_print): Here (new).
(x86_bc_jmp_resolve, x86_bc_jmpfar_resolve): Likewise (latter new).
(x86_bc_jmp_tobytes, x86_bc_jmpfar_tobytes): Likewise (latter new).
* x86id.re (OPAP_JmpFar): Remove (detected immediately now).
(jmp_insn, call_insn): Update.
(x86_finalize_jmpfar): New.
(x86_finalize_jmp): Remove far jump-related code.
(yasm_x86__finalize_insn): Add check for SEG:OFF immediate operand, and
apply X86_FAR_SEGOFF if necessary. Match OPTM_Far against both X86_FAR
and X86_FAR_SEGOFF. Add shortcut to x86_finalize_jmpfar().
svn path=/trunk/yasm/; revision=1182
initial parse stage to a new pass between parse and optimization. This
should allow for more accurate generation of arch bytecodes and other future
code simplifications in arch.
This change necessitated changing how bytecodes are extended from the base
yasm_bytecode structure; the original method did not allow for bytecodes to
be reliably changed from one type to another, as reallocation of the base
structure to fit the new type could result in the entire structure being
relocated on the heap, and thus all the pointer references to the original
bytecode being lost. After this commit, the yasm_bytecode base structure
has a void pointer to any extension data. This change rippled across all
bytecode-creating source files, and comprises the majority of this commit.
* bc-int.h (yasm_bytecode): Add contents pointer.
(yasm_bytecode_callback): Make destroy() and print() take void *contents
instead of bytecode pointer.
(yasm_bc_create_common): Take a pointer to contents instead of datasize.
(yasm_bc_transform): New; transforms a bytecode of any type into a
different type.
* bytecode.c (bytecode_data, bytecode_reserve, bytecode_incbin)
(bytecode_align): Remove bc base structure.
(bc_data_destroy, bc_data_print): Update to match yasm_bytecode_callback.
(bc_reserve_destroy, bc_reserve_print): Likewise.
(bc_incbin_destroy, bc_incbin_print): Likewise.
(bc_align_destroy, bc_align_print): Likewise.
(yasm_bc_create_common): Take a pointer to contents instead of datasize.
(bc_data_resolve, bc_data_tobytes, yasm_bc_create_data): Update to use
contents pointer.
(bc_reserve_resolve, bc_reserve_tobytes, yasm_bc_create_reserve): Likewise.
(bc_incbin_resolve, bc_incbin_tobytes, yasm_bc_create_incbin): Likewise.
(yasm_bc_create_align): Likewise.
(yasm_bc_destroy, yasm_bc_print): Update to match yasm_bytecode_callback.
* section.c (yasm_object_get_general, yasm_object_create_absolute): Pass
a NULL pointer instead of yasm_bytecode size to yasm_bc_create_common().
* stabs-dbgfmt.c (stabs_bc_str, stabs_bc_stab): Remove.
(stabs_bc_str_destroy, stabs_bc_str_print): Update.
(stabs_bc_stab_destroy, stabs_bc_stab_print): Likewise.
(stabs_bc_str_callback, stabs_bc_stab_callback): Add common finalize().
(stabs_dbgfmt_append_bcstr): Update to use contents pointer.
(stabs_dbgfmt_append_stab, stabs_dbgfmt_generate): Likewise.
(stabs_bc_stab_tobytes, stabs_bc_str_tobytes): Likewise.
* lc3barch.h (lc3b_insn): Move here from lc3bbc.c.
(lc3b_new_insn_data, yasm_lc3b__bc_create_insn): Remove.
(yasm_lc3b__bc_transform_insn): New.
* lc3bbc.c (lc3b_insn): Remove (moved).
(lc3b_bc_insn_destroy, lc3b_bc_insn_print): Update.
(lc3b_bc_callback_insn): Add common finalize().
(lc3b_bc_insn_resolve, lc3b_insn_tobytes): Use contents pointer.
(yasm_lc3b__bc_create_insn): Remove.
(yasm_lc3b__bc_transform_insn): New.
* lc3bid.re (yasm_lc3b__parse_insn): Directly create lc3b_insn.
* x86arch.h (x86_insn, x86_jmp): Move here from x86bc.c
(x86_new_insn_data, x86_new_jmp_data): Remove.
(yasm_x86__bc_create_insn, yasm_x86__bc_create_jmp): Remove.
(yasm_x86__bc_transform_insn, yasm_x86__bc_transform_jmp): New.
* x86bc.c (x86_insn, x86_jmp): Remove (moved).
(x86_bc_insn_destroy, x86_bc_insn_print): Update.
(x86_bc_jmp_destroy, x86_bc_jmp_print): Likewise.
(x86_bc_callback_insn, x86_bc_callback_jmp): Add common finalize().
(x86_bc_insn_resolve, x86_insn_tobytes): Use contents pointer.
(x86_bc_jmp_resolve, x86_jmp_tobytes): Likewise.
(yasm_x86__bc_create_insn, yasm_x86__bc_create_jmp): Remove.
(yasm_x86__bc_transform_insn, yasm_x86__bc_transform_jmp): New.
* x86id.re (yasm_x86__parse_insn, x86_parse_jmp): Directly create bytecode
contents.
* bc-int.h (yasm_bytecode_callback): Add new finalize() that will finalize
a bytecode after the parse stage.
(yasm_bc_finalize_common): New; common version of bytecode callback
finalize function.
* bytecode.h (yasm_bc_finalize): New wrapper around callback finalize().
* bytecode.c (yasm_bc_finalize): Implementation.
(yasm_bc_finalize_common): Likewise.
(bc_data_callback, bc_data_reserve_callback, bc_incbin_callback)
(bc_align_callback): Add yasm_bc_finalize_common() as finalize() function.
* section.h (yasm_object_finalize): New; finalizes an entire object.
* section.c (yasm_object_finalize): Implementation.
* yasm.c (main): Call yasm_object_finalize() after parse.
* bc-int.h (yasm_effaddr): Add segreg.
* bytecode.h (yasm_ea_set_segreg): New function to set segreg.
* bytecode.c (yasm_ea_set_segreg): Implement.
* x86bc.c (yasm_x86__ea_create_reg, x86_ea_print, x86_bc_insn_resolve)
(x86_bc_insn_tobytes): Use new EA segreg location.
* coretype.h: Move yasm_insn_operands definition from arch.h to here, as it
is now used in prototypes in bytecode.h.
* bytecode.c (bytecode_insn): New instruction bytecode.
(bc_insn_destroy, bc_insn_print, bc_insn_finalize, bc_insn_resolve)
(bc_insn_tobytes): New callback support functions.
(bc_insn_callback): New.
(yasm_bc_create_insn, yasm_bc_insn_add_prefix, yasm_bc_insn_add_seg_prefix):
New parser-callable functions.
* bytecode.h (yasm_bc_create_insn, yasm_bc_insn_add_prefix)
(yasm_bc_insn_add_seg_prefix): Likewise.
* nasm-bison.y: Call new insn bytecode functions rather than arch functions.
* arch.h (YASM_ARCH_VERSION): Bump version.
(yasm_arch): Rename and extend parse_insn to finalize_insn. Remove
parse_prefix, parse_seg_prefix, and parse_seg_override.
(yasm_arch_parse_insn): Rename to yasm_arch_finalize_insn.
(yasm_arch_parse_prefix, yasm_arch_parse_seg_prefix)
(yasm_arch_parse_seg_override): Remove.
* lc3barch.c (yasm_lc3b_LTX_arch): Update to match new yasm_arch.
* lc3barch.h (yasm_lc3b__parse_insn): Rename to yasm_lc3b__finalize_insn.
* lc3bid.re (yasm_lc3b__parse_insn): Likewise.
* x86arch.c (x86_parse_prefix, x86_parse_seg_prefix)
(x86_parse_seg_override): Remove.
(yasm_x86_LTX_arch): Update to match new yasm_arch.
* x86arch.h (yasm_x86__parse_insn): Rename to yasm_x86__finalize_insn.
* x86id.re (yasm_x86__parse_insn): Likewise.
(x86_new_jmp): Rename to x86_finalize_jmp.
* x86arch.h (yasm_x86__bc_apply_prefixes, yasm_x86__ea_init): New.
* x86bc.c (yasm_x86__bc_apply_prefixes, yasm_x86__ea_init): Likewise.
svn path=/trunk/yasm/; revision=1177
similar) to ELF. They are used identically to NASM's ELF shared object
support.
Due to limited WRT support throughout libyasm, this caused a lot of rippling
changes. A major cleanup needs to be performed later to clear some of this
hackiness up.
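As an illustrative sketch (not taken from this commit or its elfso.asm test),
usage follows NASM's ELF shared object conventions, e.g. for 32-bit PIC code:

    [bits 32]
    extern func                         ; function resolved at run time
    extern var                          ; data object defined in another module
    extern _GLOBAL_OFFSET_TABLE_
    section .text
    global example
example:
    call .get_got
.get_got:
    pop  ebx
    add  ebx, _GLOBAL_OFFSET_TABLE_ + $$ - .get_got wrt ..gotpc  ; GOT base -> ebx
    call func wrt ..plt                 ; call through the PLT
    mov  eax, [ebx + var wrt ..got]     ; load var's address from its GOT entry
    lea  eax, [ebx + var wrt ..gotoff]  ; address of var relative to the GOT base
    ret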
* elf-machine.h (func_accepts_size_t): Rename to func_accepts_reloc().
(func_map_reloc_info_to_type): Add parameter ssyms for array of special syms.
(elf_machine_ssym): New; for defining machine-specific special syms.
(elf_machine_handler): Change accepts_reloc_size to accepts_reloc. Add new
ssyms and num_ssyms members.
* elf-x86-x86.c (ssym_index): New; this allows nice indexing of ssym arrays.
(elf_x86_x86_accepts_reloc): Rename of elf_x86_x86_accepts_reloc_size. Add
support for various WRT ssyms.
(elf_x86_x86_map_reloc_info_to_type): Add support for various WRT ssyms.
(elf_x86_x86_ssyms): New array of supported special symbols.
(elf_machine_handler_x86_x86): Update for above changes/additions.
* elf-x86-amd64.c (ssym_index, elf_x86_amd64_accepts_reloc)
(elf_x86_amd64_map_reloc_info_to_type, elf_x86_amd64_ssyms)
(elf_machine_handler_x86_amd64): Likewise.
* elf.h (elf_reloc_entry): Add wrt member.
(elf_set_arch): Add symtab parameter.
(elf_is_wrt_sym_relative): New.
(elf_reloc_entry_create): Add wrt parameter.
* elf.c (elf_set_arch): Allocate special syms from machine level.
(elf_is_wrt_sym_relative): New; search special syms, and report whether a
WRT ssym should be symbol-relative or section-relative.
(elf_reloc_entry_create): Pass WRT and ssyms info down to machine level.
* elf-objfmt.c (yasm_objfmt_elf): Add dotdotsym (..sym) symrec member.
(elf_objfmt_create): Pass symtab to elf_set_arch(). Allocate ..sym symbol.
(elf_objfmt_output_reloc): Update for elf_reloc_entry_create() change.
(elf_objfmt_output_expr): Handle WRT ssym. Make the relocation symbol-relative
rather than section-relative if the expression is WRT ..sym or WRT an ssym that
the machine level wants to be symbol-relative.
* symrec.c (yasm_symrec_get_label): Check for NULL sym->value.precbc; this
is now possible due to the user-accessible special symbols that ELF et al
create, which all have NULL precbc's.
* expr.c (yasm_expr_extract_symrec): Recurse into IDENT's to make more exprs
acceptable.
* coretype.h (yasm_output_reloc_func): Remove rel parameter as it shouldn't
be needed and complicates writing of the reloc functions.
* stabs-dbgfmt.c (stabs_bc_stab_tobytes): Update output_reloc() call.
* elf-objfmt.c (elf_objfmt_output_reloc): Update to match.
* arch.h (yasm_arch_module): Add intnum_fixup_rel() function, change
intnum_tobytes() to not take rel parameter. The rel functionality is being
separated because sometimes it's desirable not to put the data into the
written intnum (e.g. ELF RELA relocations).
(YASM_ARCH_VERSION): Bump due to above change.
(yasm_arch_intnum_fixup_rel): New wrapper.
(yasm_arch_intnum_tobytes): Update wrapper (removing rel).
* lc3bbc.c (yasm_lc3b__intnum_fixup_rel): New, with code from:
(yasm_lc3b__intnum_tobytes): Remove rel code.
* lc3barch.h (yasm_lc3b__intnum_fixup_rel): New.
(yasm_lc3b__intnum_tobytes): Update.
* lc3barch.c (yasm_lc3b_LTX_arch): Reference yasm_lc3b__intnum_fixup_rel().
* x86bc.c (yasm_x86__intnum_fixup_rel): New, with code from:
(yasm_x86__intnum_tobytes): Remove rel code.
* x86arch.h (yasm_x86__intnum_fixup_rel): New.
(yasm_x86__intnum_tobytes): Update.
* x86arch.c (yasm_x86_LTX_arch): Reference yasm_x86__intnum_fixup_rel().
* xdf-objfmt.c (xdf_objfmt_output_expr): Update to use intnum_fixup_rel() /
new intnum_tobytes().
* bin-objfmt.c (bin_objfmt_output_expr): Likewise.
* coff-objfmt.c (coff_objfmt_output_expr): Likewise.
* elf-objfmt.c (elf_objfmt_output_expr): Likewise.
* nasm-listfmt.c (nasm_listfmt_output_expr): Likewise.
* nasm-bison.y: Change precedence of WRT and : operators: instead of being
the strongest binders they are now the weakest. This is needed to correctly
parse and be able to split WRT expressions. WRT handling is still somewhat
of a hack throughout yasm; we'll fix this later.
* x86expr.c (x86_expr_checkea_distcheck_reg): Don't check for WRT here.
(x86_expr_checkea_getregusage): Add new wrt parameter. Use it to handle
"WRT rip" separately from other operators. Recurse if there's a WRT below
the WRT rip; this is to handle cases like ELF-AMD64's elfso64.asm.
(yasm_x86__expr_checkea): Split off top-level WRT's and feed through
separately to x86_expr_checkea_getregusage().
* x86bc.c (x86_bc_insn_tobytes): Ensure the SUB operation for PC-relative
displacements goes BELOW any WRT expression.
(x86_bc_jmp_tobytes): Likewise.
* elfso.asm, elfso.hex, elfso.errwarn: New 32-bit ELF shared object tests.
* modules/objfmts/elf/tests/Makefile.inc: Include in distribution.
* elfso64.asm, elfso64.hex, elfso64.errwarn: New 64-bit ELF shared object
tests. This is not a good example, as the assembled code doesn't work, but
it at least tests the special symbols.
* modules/objfmts/elf/tests/amd64/Makefile.inc: Include in distribution.
svn path=/trunk/yasm/; revision=1168
otherwise the override is ignored and a warning is output, so correct
code is always generated.
Reported by: vclaudepierre@tiscali.fr
Bugzilla Bug: 40
* x86expr.c (x86_checkea_calc_displen): Output warnings for noreg case;
reorder such that displacement length calculation is done if the warning
is output.
* x86bc.c (x86_bc_insn_resolve): If length was forced but checkea had to
override it, save it always (to avoid multiple warnings).
* addrop.hex: Match corrected code generation.
* addrop.errwarn: Include new warning message.
* ea-warn.asm, ea-warn.errwarn, ea-warn.hex: Test new functionality more
in depth (code thanks to bug reporter).
svn path=/trunk/yasm/; revision=1165
a FAR target expression. The create_branch() call could be called before
the expr_copy() call; the former can (and does) delete op->data.val. Move
the expr_copy() call to an earlier statement to force the correct
evaluation order.
Thanks to: HP TestDrive for providing the Itanium system that discovered
this bug.
svn path=/trunk/yasm/; revision=1163
base structure for relocations and using it in all object formats.
* section.h (yasm_reloc): New base relocation type.
* section.c (yasm_section_add_reloc): New function to add a yasm_reloc to a
section.
(yasm_section_relocs_first, yasm_section_reloc_next): New functions to
access yasm_reloc list.
(yasm_reloc_get): New function to get base info from yasm_reloc.
* section.h: New prototypes for above section.c functions.
* section.c (yasm_section): Add relocs and destroy_reloc members.
(yasm_object_get_general): Initialize relocs and destroy_reloc.
(yasm_object_create_absolute): Likewise.
(yasm_section_destroy): Destroy any created relocs.
* xdf-objfmt.c (xdf_reloc): Base off of new yasm_reloc structure.
(xdf_objfmt_output_expr): Update after xdf_reloc changes.
(xdf_objfmt_output_section): Likewise.
(xdf_section_data_destroy): Likewise.
(xdf_section_data_print): Likewise.
* elf.h (elf_secthead): Remove unneeded list of relocs.
(elf_reloc_entry): Base off of new yasm_reloc structure.
* elf.c (elf_reloc_entry_destroy, elf_relocs_create)
(elf_reloc_destroy): Remove.
(elf_reloc_entry_create): Update after elf_reloc_entry changes.
(elf_secthead_append_reloc, elf_secthead_write_relocs_to_file): Take
additional pointer to yasm_section to access new relocations storage and
update for new elf_reloc_entry structure.
(elf_secthead_create): Update after elf_secthead changes.
(elf_secthead_destroy): Likewise.
(elf_secthead_print): Likewise.
* elf-objfmt.c (elf_objfmt_output_reloc, elf_objfmt_output_expr)
(elf_objfmt_output_section, elf_objfmt_output_secthead): Likewise.
* elf.h: Update prototypes for above elf.c changes.
* coff-objfmt.c (coff_reloc): Base off of new yasm_reloc structure.
(coff_objfmt_output_expr): Update after coff_reloc changes.
(coff_objfmt_output_section): Likewise.
(coff_section_data_destroy): Likewise.
(coff_section_data_print): Likewise.
* nasm-listfmt.c (sectreloc, bcreloc): New.
(nasm_listfmt_output_info): Add bcrelocs, next_reloc, next_reloc_addr.
(nasm_listfmt_output_expr): Record relocations in bcrelocs if next_reloc
and next_reloc_addr match the current expr parameters.
(nasm_listfmt_output): Initialize new members of nasm_listfmt_output_info,
and use bcrelocs data generated by nasm_listfmt_output_expr to add reloc
information to list output.
* x86bc.c (x86_bc_jmp_tobytes): Duplicate jmp_target before splitting
SEGOFF (:) pairs. This avoids a memory leak and doesn't destroy the
ability for the bytecode to be converted to bytes again (which is what
happens when listfmt is used).
* yasm.xml, yasm.1: Add documentation for new listfmt-related options
for yasm frontend: -L (--lformat) and -l (--list).
svn path=/trunk/yasm/; revision=1154
control whether the symbol is replaced with the symbol's value (old
behavior) or just replaced with 0 (new optional behavior). The old
behavior is enabled by setting relocate=1.
* expr.h (yasm_expr_extract_symrec): Likewise (and document new behavior).
* elf-objfmt.c (elf_objfmt_output_expr): Use new function (with relocate=1).
* coff-objfmt.c (coff_objfmt_output_expr): Likewise.
* expr.c (yasm_expr_extract_segment): Renamed to yasm_expr_extract_segoff, a
more appropriate name given the operator it looks at.
* expr.h (yasm_expr_extract_segment): Likewise.
* x86bc.c (x86_bc_jmp_tobytes): Use new function name.
* expr.c (yasm_expr_extract_seg): New function to remove SEG unary operator.
* expr.h (yasm_expr_extract_seg): Likewise.
* expr.c (yasm_expr_extract_shr): New function to split SHR operator into
left and right halves.
* expr.h (yasm_expr_extract_shr): Likewise.
* xdf.h: New header file describing the newly added Extended Dynamic Object
Format (XDF). Note: GCC-only code.
* xdfdump.c: New utility that uses the format described in xdf.h to
completely dump an XDF file. Note: non-portable code (runs correctly on
little endian machines only).
Neither of these files is currently included in the distribution.
* xdf-objfmt.c: New YASM objfmt module to output XDF format object files.
* modules/objfmts/xdf/Makefile.inc: Add to build.
* modules/objfmts/Makefile.inc: Likewise.
The XDF object format is a blend of COFF and OMF. It is a very simple
object format intended for use by operating system loaders or similar types
of targets. It allows shifted relocations (useful for static interrupt or
page tables), both flat and segment-relative, and the use of the SEG, WRT,
and x86 JMP FAR notations.
Test cases will be committed soon.
svn path=/trunk/yasm/; revision=1149
* x86arch.h (x86_new_insn_data): Add shortmov_op for shortmov post-action.
* x86bc.c (x86_insn): Likewise.
(yasm_x86__bc_create_insn): Copy shortmov_op to instruction.
(x86_bc_insn_resolve): Handle shortmov_op post-action.
* x86id.re (yasm_x86__parse_insn): Set shortmov_op post-action if desired.
* x86id.re (mov_insn): Through reorder and use of new shortmov_op
post-action, change generated code for mov on AMD64. On AMD64, the short
mov (opcode A0/A1/A2/A3), generated when moving to AL/AX/EAX/RAX from an
absolute address (no registers), has a 64-bit address size in 64-bit mode. While
an address override can reduce it to 32 bits, automatically generating such an
override does not fit well with the model of not doing anything behind the
programmer's back. Instead, we now generate the 32-bit address size MOD/RM
form unless the address size is specifically set to 64 bits using [qword 0]
notation (this is the equivalent of the GNU AS movabs pseudo-instruction).
The short mov is still generated in 32-bit mode, whether obtained via BITS
setting or an a32 prefix in BITS 64 mode. (The a32 prefix handling
necessitated the new shortmov post-action.) Examples (pulled from new
mem64.asm):
mov ax, [0] ; 66 8B 04 25 00 00 00 00
mov rax, [dword 0] ; 48 8B 04 25 00 00 00 00
mov al, [qword 0xfedcba9876543210] ; A0 10 32 54 76 98 BA DC FE
mov al, [0xfedcba9876543210] ; 8A 04 25 10 32 54 76 (+ warning)
a32 mov rax, [0] ; 67 48 A1 00 00 00 00
* mem64.asm: Update test to match code changes.
* mem64.hex: Likewise.
* mem64.errwarn: Likewise.
Related to: Bugzilla Bug 33
Reported by: Jeff Lawson <jlawson-yasm@bovine.net>
svn path=/trunk/yasm/; revision=1134
a non-multiplier used after a multiplier (e.g. edi*8+eax). Previously,
this resulted in the eax overriding the edi as the indexreg, causing the
effective address not to be recognized.
Update the effaddr testcase to test this case and a few other similar ones.
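For example (illustrative; not necessarily the exact lines added to the effaddr
testcase):

    [bits 32]
    mov eax, [edi*8+eax]    ; edi stays the scaled index register; eax becomes the base
    mov eax, [eax+edi*8]    ; equivalent effective address with the base written first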
Bugzilla bug: 31
Reported by: vclaudepierre@tiscali.fr
svn path=/trunk/yasm/; revision=1106
64-bit immediate. Actually, whether it's signed or unsigned seems to be
uncertain; AMD64 documentation shows it as signed, but Intel's new IA-32e
says it's unsigned! While we're here, the Imm8 version is signed, not
unsigned.
Bugzilla bug: 30
Reported by: Michael Ryan <michaelryan@mindspring.com>
svn path=/trunk/yasm/; revision=1105
in 64-bit (AMD64) mode. Intel says these bytes should not be treated as
prefixes, but AMD64 treats them as legacy prefixes, expecting them to come
before the REX byte.
For now, keep the three-byte max instruction length (although it's not truly
correct), as handling the other "3-byte" cases such as R/M spare with no EA
is probably more painful than it's worth to push down to later in the code
generation path.
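As one illustration of the legacy-prefix-before-REX ordering (reusing the a32
form listed for mem64.asm earlier in this log; not part of the original message):

    [bits 64]
    a32 mov rax, [0]    ; 67 48 A1 00 00 00 00 -- the legacy address-size
                        ; prefix (67) is emitted before the REX byte (48)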
Reported by: Henryk Richter <henryk.richter@comlab.uni-rostock.de>
svn path=/trunk/yasm/; revision=1094
- PEXTRW
- MOVMSKPD
- MOVMSKPS
- PMOVMSKB
Reported by: Vivek Mohan <vivekm@phreaker.net>
Also, while we're here, fix all CVT* SIMD instructions; they were variously
incorrect (mostly with regard to the accepted operands).
New testcase, simd-2, tests all the above (output verified with GNU objdump).
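Illustrative operand forms now handled (these are not the literal contents of the
simd-2 testcase; the four instructions above all take a general-purpose register
destination):

    [bits 32]
    pextrw    eax, mm0, 0     ; r32, mm, imm8
    pextrw    eax, xmm1, 3    ; r32, xmm, imm8
    movmskps  eax, xmm2       ; r32, xmm
    movmskpd  edx, xmm3       ; r32, xmm
    pmovmskb  ecx, mm1        ; r32, mm
    pmovmskb  ecx, xmm4       ; r32, xmm
    cvtsi2ss  xmm0, eax       ; example CVT* form: xmm, r/m32
    cvttss2si eax, xmm0       ; example CVT* form: r32, xmm/m32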
svn path=/trunk/yasm/; revision=1093
"functions" that call down to the module level. Doesn't really change the
internal complexities, just makes it easier to read and write code that
uses it.
svn path=/trunk/yasm/; revision=1080
first parameter (before MOD_Op2Add). Before this change, MOD_Gap0 did not
eat a parameter until AFTER MOD_Op2Add.
Reported by: Edouard Gomez <ed.gomez@free.fr>
svn path=/trunk/yasm/; revision=1076
pcmp* family. The first three had some operand encoding problems, and pcmp*
was typoed as pacmp*.
Reported by: Edouard Gomez <ed.gomez@free.fr>
svn path=/trunk/yasm/; revision=1072
As yasm has evolved, various minor additions have been made to libyasm to
support the new features. These minor additions have accumulated, and
some contain significant redundancies. In addition, the core focus of
yasm has begun to shift away from the front-end command-line program "yasm"
toward libyasm, a collection of reusable routines for use in all
sorts of programs dealing with code at the assembly level, and the modules
that provide specific features for parsing such code.
This libyasm/module update focuses on cleaning up much of the cruft that
has accumulated in libyasm, standardizing function names, eliminating
redundancies, making many of the core objects more reusable for future
extensions, and starting to make libyasm and the modules thread-safe by
eliminating static variables.
Specific changes include:
- Making a symbol table data structure (no longer global). It follows a
factory model for creating symrecs.
- Label symbols now refer only to bytecodes; bytecodes have a pointer to
their containing section.
- Standardizing on *_create() and *_destroy() for allocation/deallocation.
- Adding a standardized callback mechanism for all data structures that
allow associated data. This allowed the removal of objfmt- and
dbgfmt-specific data callbacks in their interfaces.
- Unmodularizing linemgr, but allowing multiple linemap instances (linemgr
is now renamed linemap).
- Remove references to lindex; all virtual lines (from linemap) are now
just "line"s.
- Eliminating the bytecode "type" enum, instead adding a standardized
callback mechanism for custom (and standard internal) bytecode types.
This will make it much easier to add new bytecodes, and eliminate the
possibility of type collisions. This also allowed the removal of the
of_data and df_data bytecodes, as objfmts and dbgfmts can now easily
implement their own bytecodes, and the cleanup of arch's bytecode usage.
- Remove the bytecodehead and sectionhead pseudo-containers, instead
making true containers: section now implements all the functions of
bytecodehead, and the new object data structure implements all the
functions of sectionhead.
- Add object data structure: it's a container that contains sections, a
symbol table, and a line mapping for a single object. Every former use
of sectionhead now takes an object.
- Make arch interface and all standard architectures thread-safe:
yasm_arch_module is the module interface; it contains a create()
function that returns a yasm_arch * to store local yasm_arch data; all
yasm_arch_module functions take the yasm_arch *.
- Make nasm parser thread-safe.
To be done in phase 2: making other module interfaces thread-safe. Note
that while the module interface may be thread-safe, not all modules may be
written in such a fashion (hopefully all the "standard" ones will be, but
this is yet to be determined).
svn path=/trunk/yasm/; revision=1058
[rip+symbol] in that the latter adds the symbol offset to rip, whereas the
former is the same as [symbol] but uses rip-relative addressing. This is
a minor overload of the WRT operator, but reads well and shouldn't conflict
with the use of WRT against sections.
Doing this currently adds a bit of overhead to all effective addresses in
64-bit mode (a $ symbol reference). This is the cleanest approach I could
figure out; a time/space trade-off could be made later, such as prescanning for
RIP usage before allocating the symbol.
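For illustration of the new syntax (a sketch, assuming the [symbol wrt rip] form
this entry describes; the two forms differ as explained above):

    [bits 64]
    section .data
sym:    dd 0
    section .text
    mov eax, [sym wrt rip]   ; same address as [sym], but encoded rip-relative
    mov eax, [rip+sym]       ; by contrast, adds sym's offset to the value of rip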
svn path=/trunk/yasm/; revision=1032
generation code wasn't seeing it because it wasn't looking at the modified
opersize.
Bug noticed by: Antoine Leca <antoine64leca@unknown> (x86-64 discuss ML)
svn path=/trunk/yasm/; revision=1022
module users to ensure the module interface they're using matches the
interface the module was compiled with. The #define YASM_module_VERSION
should be incremented on every functional change to the module interface.
svn path=/trunk/yasm/; revision=1021
differentiate e.g. AMD64 from x86. Doesn't prohibit anything in x86 yet,
but does standardize unsupported warnings across objfmts (most objfmts will
not support all machines and/or all architectures).
svn path=/trunk/yasm/; revision=1020
use of configure.ac's --enable-warnerror, which was set up to disable
conversion errors due to flex's warning-prone generated code. As we no
longer use flex, fix configure.ac to not disable conversion errors.
svn path=/trunk/yasm/; revision=1018